[strings] Switch to Weblate
https://github.com/orgs/organicmaps/discussions/4515

Weblate works fine. There is no longer a need to maintain a homegrown, non-standard translation toolchain. Categories are not in Weblate yet, but they weren't supported by the previous toolkit either. This issue can be addressed later.

Signed-off-by: Roman Tsisyk <roman@tsisyk.com>
parent 058644ecef · commit 6e37398cf1
9 changed files with 90 additions and 1458 deletions

.github/workflows/strings-check.yaml (vendored, 31 deletions)

@@ -1,31 +0,0 @@
name: Validate translation strings
on:
  workflow_dispatch: # Manual trigger
  pull_request:
    paths:
      - .github/workflows/strings-check.yaml # Run check on self change
      - data/strings/strings.txt
      - data/strings/types_strings.txt
      - data/strings/sound.txt
      - data/countries_names.txt
      - iphone/plist.txt
      - tools/python/strings_utils.py

jobs:
  validate-translation-strings:
    name: Validate translation strings
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3'

      - name: Validate string files
        shell: bash
        run: |
          for f in data/strings/strings.txt data/strings/types_strings.txt data/strings/sound.txt data/countries_names.txt iphone/plist.txt; do
            ./tools/python/strings_utils.py --validate $f -o
          done;
          git diff --exit-code

.gitmodules (vendored, 3 deletions)

@@ -7,9 +7,6 @@
 [submodule "3party/protobuf/protobuf"]
 	path = 3party/protobuf/protobuf
 	url = https://github.com/organicmaps/protobuf.git
-[submodule "tools/twine"]
-	path = tools/twine
-	url = https://github.com/organicmaps/twine.git
 [submodule "3party/Vulkan-Headers"]
 	path = 3party/Vulkan-Headers
 	url = https://github.com/KhronosGroup/Vulkan-Headers.git

docs/TRANSLATIONS.md

@@ -1,115 +1,121 @@
 # Translations

-## Help us to review/proofread translations
+Translations are managed through [Weblate][weblate]. Please [contribute][contribute] translations via [Weblate][weblate], and the system and maintainers will handle the rest.

 You can join our [GitHub translation teams](https://github.com/orgs/organicmaps/teams/translations/teams),
 so any contributor can tag all teams (or a specific language team) to get help with the review.

+## Components

-Please respond in the relevant [GitHub discussion](https://github.com/orgs/organicmaps/discussions/8538), or let us know at hello@organicmaps.app
+The project consists of multiple components, each with its own translation files.

-## Contribute translations directly
+| Weblate Component | Description | Translation Files |
+| --------------------------------------------------- | ---------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
+| [Android][android_weblate] | UI strings | [android/app/src/main/res/values\*/strings.xml][android_git] ([en][android_git_en]) |
+| [Android feature types][android_typestrings_weblate] | Map feature types | [android/app/src/main/res/values\*/type_strings.xml][android_git] ([en][android_typestrings_git_en]) |
+| [iOS][ios_weblate] | UI strings | [iphone/Maps/LocalizedStrings/\*.lproj/Localizable.strings][ios_git] ([en][ios_git_en]) |
+| [iOS Type Strings][ios_typestrings_weblate] | OpenStreetMap Types | [iphone/Maps/LocalizedStrings/\*.lproj/LocalizableTypes.strings][ios_git] ([en][ios_typestrings_git_en]) |
+| [iOS Plurals][ios_plurals_weblate] | UI strings (plurals) | [iphone/Maps/LocalizedStrings/\*.lproj/Localizable.stringsdict][ios_git] ([en][ios_plurals_git_en]) |
+| [iOS Plist][ios_plist_weblate] | UI strings (system-level) | [iphone/Maps/LocalizedStrings/\*.lproj/InfoPlist.strings][ios_git] ([en][ios_plist_git_en]) |
+| [TTS][tts_weblate] | Voice announcement strings for navigation directions (TTS) | [data/sound-strings/\*.json][tts_git] ([en][tts_git_en]) |
+| [Countries][countries_weblate] | Country names for downloader | [data/countries-strings/\*.json][countries_git] ([en][countries_git_en]) |
+| Search keywords | Search keywords/aliases/synonyms | [data/categories.txt][categories_git] |
+| Search keywords (cuisines) | Search keywords for cuisine types | [data/categories_cuisines.txt][categories_cuisines_git] |
+| AppStore Descriptions | AppStore descriptions | [iphone/metadata][appstore_git] ([en][appstore_git_en]) |
+| Android Stores Descriptions | Google, F-Droid, Huawei store descriptions | [android/app/src/fdroid/play][googleplay_git] ([en][googleplay_git_en]) |
+| [Website][website_weblate] | Website content | [organicmaps/website][website_git] ([see details][website_guide]) |

-Adding and updating translations is easy!
-1. Change the translation file you want, e.g. [strings.txt](../data/strings/strings.txt) ([raw text version](https://raw.githubusercontent.com/organicmaps/organicmaps/master/data/strings/strings.txt))
-2. Commit your string changes with the title `[strings] {description of changes}`
-3. (Optional) Run the `tools/unix/generate_localizations.sh` script
-4. (Optional) Commit the updated files with the title `[strings] Regenerated`
-5. Send a pull request!
+Components without links haven't been integrated into Weblate and must be translated directly via [GitHub Pull Requests](CONTRIBUTING.md).

-Please make sure to add a [Developers Certificate of Origin](CONTRIBUTING.md#legal-requirements) to your commit descriptions.
+## Translating

-## Requirements
+### Workflow

-To run the `tools/unix/generate_localizations.sh` script, you need `ruby` installed.
+Translations are managed through [Weblate][weblate]. Direct submissions to this repository are not recommended but possible in specific cases (like batch changes). Please prefer using Weblate for translations whenever possible. Weblate periodically creates pull requests, which [@organicmaps/mergers][mergers] review and merge as usual.

-## Translation files
+### Cross-Component Synchronization

-- Main:
-  - Application UI strings: [`data/strings/strings.txt`](../data/strings/strings.txt)
-  - A few iOS-specific strings: [`iphone/plist.txt`](../iphone/plist.txt)
+Android and iOS share most of the strings. Weblate automatically syncs translations between components (e.g., from Android to iOS and vice versa), so updating a string in one place is usually sufficient.

-- POI Categories:
-  - Names of map features/types: [`data/strings/types_strings.txt`](../data/strings/types_strings.txt)
-  - Search keywords/aliases/synonyms for map features: [`data/categories.txt`](../data/categories.txt)
+## Machine Translation

-The POI definitions in the [OpenStreetMap Wiki](https://wiki.openstreetmap.org/) help find the most suitable translation. Both POI files should be kept in sync, so make sure that every category name is also contained in the corresponding search keyword list. Strings in _categories.txt_ should, however, not contain common tokens like e.g. Shop, Store or Center as separate words.
+Weblate is configured to generate machine translations using the best available tools. Auto-translated entries are added as suggestions.

-- Additional:
-  - Text-to-speech strings for navigation: [`data/strings/sound.txt`](../data/strings/sound.txt)
+### Failing checks

-  - Android stores description: [`android/app/src/fdroid/play/`](../android/app/src/fdroid/play/)
-  - Apple App Store description: [`iphone/metadata/`](../iphone/metadata/)
+Please review any issues flagged by automated checks, such as missing placeholders, inconsistencies, and other potential errors. Use the filter [`has:check AND state:>=translated language:de`][failing_checks], replacing `de` with your target language.

-  - Search keywords for popular brands: [`data/categories_brands.txt`](../data/categories_brands.txt)
-  - Search keywords for cuisine types: [`data/categories_cuisines.txt`](../data/categories_cuisines.txt)
+## Developing

-  - Country / map region names: [`data/countries_names.txt`](../data/countries_names.txt)
+### Workflow

-  - [other strings](STRUCTURE.md#strings-and-translations) files
+Translations are handled by the translation team via [**Weblate**][weblate], with no direct developer involvement required. Developers are only responsible for adding English base strings to the source file (see [Components](#components)). Weblate manages the rest. If you're confident in a language, feel free to contribute translations, but please avoid adding machine translations or translating languages you are not familiar with.

 Language codes used are from the [ISO 639-1 standard](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes).
 If a string is not translated into a particular language then it falls back to English or a "parent" language (e.g. `es-MX` falls back to `es`).

+### Tools

-## Tools
+Android developers can utilize the built-in features of Android Studio to add and modify strings efficiently. iOS developers are advised to edit `Localizable.strings` as a text file, as Xcode's interface only supports "String Catalog", which is not currently in use. JSON files can be modified using any text editor. To ensure consistency, always follow the established structure and include a comment when adding new strings.

-To find strings without translations, substitute `ar` with your language code and run the following script:
-```
-tools/python/strings_utils.py -l ar -pm
-```
-By default it searches `strings.txt`; to check `types_strings.txt`, add the `-t` option.
-There are many more options, e.g. to print various translation statistics, or to validate and re-format translation files.
-Check `tools/python/strings_utils.py -h` to see all of them.
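The removed paragraph above alludes to further `strings_utils.py` options. Judging by the script's argument parser (the file is kept at the bottom of this diff), typical invocations looked like the following sketch; the flags shown are taken from that parser, not from the removed documentation:

```bash
tools/python/strings_utils.py -pl            # per-language translation statistics
tools/python/strings_utils.py -pd -l ru,uk   # duplicate translations, limited to Russian and Ukrainian
tools/python/strings_utils.py -t -o          # validate and rewrite types_strings.txt in place
```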
+### Cross-Component Synchronization

-To check the consistency of `types_strings.txt` with `categories.txt`, run:
-```
-ruby tools/ruby/category_consistency/check_consistency.rb
-```
+When adding new strings, first check the base file of the component for existing ones. If no relevant strings are found, look for them on the corresponding platform (e.g., iOS when adding Android strings or vice versa). To maintain consistency across platforms, always reuse the existing string key from the other platform with the same English base string.
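To illustrate key reuse across platforms (the key `some_feature_title` and its text are hypothetical, introduced only for this example), the same identifier would appear in both platforms' resources:

```
android/app/src/main/res/values/strings.xml:
    <string name="some_feature_title">Some feature</string>

iphone/Maps/LocalizedStrings/en.lproj/Localizable.strings:
    "some_feature_title" = "Some feature";
```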
-## Automatic translations
+## Maintaining

-In some cases automatically translated strings are better than no translation at all.
-There are two scripts to automate a given string's translation into multiple languages.
-Please [install Translate Shell](https://www.soimort.org/translate-shell/#installation) first to be able to run them.
+## Under the Hood

-### DeepL + Google Translate fallback
+Weblate maintains an internal copy of the Git repository. The repository URL can be found under _Manage → Repository Maintenance → Weblate Repository_. All components, except for the website, share the same internal Weblate repository.

-The first one uses the free DeepL API where possible and provides significantly better translation quality.
-It requires registering a [DeepL account](https://www.deepl.com/pro#developer) and [getting an API key](https://www.deepl.com/account/summary).
-You may be asked for a credit card for verification, but it won't be charged.
-Requires Python version >= 3.7.
+Translations are extracted from the repository and stored in an internal database, which is used by the Weblate UI. Every 24 hours, this internal database is synchronized back to the internal repository. This process can also be triggered manually via _Manage → Repository Maintenance → Commit_.

-```bash
-export DEEPL_FREE_API_KEY=<your DeepL API key here>
-# Generates translations in both categories.txt and strings.txt formats at the same time:
-tools/python/translate.py English text to translate here
-# Use two-letter language codes with a colon for a non-English source language:
-tools/python/translate.py de:German text to translate here
-```
+After committing changes from the internal database to the internal repository, Weblate pushes all updates to the `weblate-i18n` branch of the main GitHub repository and creates or updates a pull request (PR) to `master`. This operation can be manually triggered via _Manage → Repository Maintenance → Push_.

-### Google Translate only
+### Reviewing PRs

-The second one is not recommended: it uses the Google API, and translations are sometimes incorrect.
-Also, it does not support European Portuguese (pt or pt-PT) and always generates Brazilian Portuguese.
+Translations are intended to be reviewed by the community on Weblate. However, if it's a user's first contribution or if there is any doubt, a quick scan and comparison with the English source can be useful.

-```bash
-# Generates translations in categories.txt format
-tools/unix/translate_categories.sh "Route"
-# Translations in strings.txt format
-DELIM=" = " tools/unix/translate_categories.sh "Route"
-```
+It is recommended to add comments directly on Weblate, as translators primarily work within that platform. If the contributor has a GitHub account, you may tag them in the pull request, but there is no guarantee that they will respond.

-## Technical details
+### Resolving Conflicts

-Most of the translation files (strings, types_strings...) are in the Twine file format ([syntax reference](https://github.com/organicmaps/twine/blob/organicmaps/README.md)).
-OM uses a custom version of the [Twine](https://github.com/organicmaps/twine)
-tool (resides in the `tools/twine/` submodule) to generate platform-native (Android, iOS)
-localization files from a single translation file.
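For context, here is a minimal sketch of the Twine-style `strings.txt` format being phased out here, inferred from the parser in `tools/python/strings_utils.py` at the bottom of this diff (sections in `[[...]]`, definitions in `[...]`, plus `comment`/`tags`/`ref` keys; the section and key names are illustrative):

```
[[Some section]]

  [some_key]
    comment = What this string is used for
    tags = android,ios
    en = English base text
    es = Texto en español
```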
+The recommended approach for resolving conflicts is as follows:

-The `tools/unix/generate_localizations.sh` script launches this conversion
-(and installs Twine beforehand if necessary).

+1. Commit all changes from the internal database to the internal Git repository:
+   _Manage → Repository Maintenance → Commit (button)_.
+2. Update the `weblate-i18n` branch on GitHub:
+   _Manage → Repository Maintenance → Push (button)_.
+3. Locally check out the `weblate-i18n` branch.
+4. Rebase it onto `master`, resolving any conflicts during the process (see the git sketch after this list).
+5. Push the branch to GitHub to update the pull request, then merge the branch or PR into `master`.
+6. Reset Weblate to sync changes from GitHub:
+   _Manage → Repository Maintenance → Reset (button)_.
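A minimal sketch of steps 3-5 in plain git commands, assuming `origin` points at the main GitHub repository:

```bash
git fetch origin
git checkout weblate-i18n
git rebase origin/master                          # resolve any conflicts here
git push --force-with-lease origin weblate-i18n   # updates the open pull request
```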
-Search keywords translation files use a custom format described at the beginning of `data/categories.txt`.

-The `tools/python/clean_strings_txt.py` script is used to sync `strings.txt` with the actual UI string usage in the codebase.
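That script is part of this diff (deleted below); judging by its argument parser, typical invocations looked like this:

```bash
tools/python/clean_strings_txt.py -v   # list keys no longer used in the codebase, exit with an error if any
tools/python/clean_strings_txt.py -s   # rewrite strings.txt with the unused keys removed
```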
-There are preliminary plans to refactor the translations workflow and migrate to Weblate.

+[weblate]: https://hosted.weblate.org/projects/organicmaps/
+[contribute]: https://docs.weblate.org/en/latest/workflows.html
+[android_weblate]: https://hosted.weblate.org/projects/organicmaps/android/
+[android_git]: https://github.com/organicmaps/organicmaps/blob/master/android/app/src/main/res/
+[android_git_en]: https://github.com/organicmaps/organicmaps/blob/master/android/app/src/main/res/values/strings.xml
+[android_typestrings_weblate]: https://hosted.weblate.org/projects/organicmaps/android-typestrings/
+[android_typestrings_git_en]: https://github.com/organicmaps/organicmaps/blob/master/android/app/src/main/res/values/types_strings.xml
+[countries_weblate]: https://hosted.weblate.org/projects/organicmaps/countries/
+[countries_git]: https://github.com/organicmaps/organicmaps/tree/master/data/countries-strings
+[countries_git_en]: https://github.com/organicmaps/organicmaps/blob/master/data/countries-strings/en.json/localize.json
+[ios_weblate]: https://hosted.weblate.org/projects/organicmaps/ios/
+[ios_git]: https://github.com/organicmaps/organicmaps/blob/master/iphone/Maps/LocalizedStrings/
+[ios_git_en]: https://github.com/organicmaps/organicmaps/blob/master/iphone/Maps/LocalizedStrings/en.lproj/Localizable.strings
+[ios_plist_weblate]: https://hosted.weblate.org/projects/organicmaps/ios-plist/
+[ios_plist_git_en]: https://github.com/organicmaps/organicmaps/blob/master/iphone/Maps/LocalizedStrings/en.lproj/InfoPlist.strings
+[ios_typestrings_weblate]: https://hosted.weblate.org/projects/organicmaps/ios-typestrings/
+[ios_typestrings_git_en]: https://github.com/organicmaps/organicmaps/blob/master/iphone/Maps/LocalizedStrings/en.lproj/LocalizableTypes.strings
+[ios_plurals_weblate]: https://hosted.weblate.org/projects/organicmaps/ios-plurals/
+[ios_plurals_git_en]: https://github.com/organicmaps/organicmaps/blob/master/iphone/Maps/LocalizedStrings/en.lproj/Localizable.stringsdict
+[tts_weblate]: https://hosted.weblate.org/projects/organicmaps/tts/
+[tts_git]: https://github.com/organicmaps/organicmaps/tree/master/data/sound-strings
+[tts_git_en]: https://github.com/organicmaps/organicmaps/blob/master/data/sound-strings/en.json/localize.json
+[categories_git]: https://github.com/organicmaps/organicmaps/blob/master/data/categories.txt
+[categories_cuisines_git]: https://github.com/organicmaps/organicmaps/blob/master/data/categories_cuisines.txt
+[website_weblate]: https://hosted.weblate.org/projects/organicmaps/website/
+[website_git]: https://github.com/organicmaps/website/
+[website_guide]: https://github.com/organicmaps/website/blob/master/TRANSLATIONS.md
+[appstore_git]: https://github.com/organicmaps/organicmaps/tree/master/iphone/metadata
+[appstore_git_en]: https://github.com/organicmaps/organicmaps/tree/master/iphone/metadata/en-US
+[googleplay_git]: https://github.com/organicmaps/organicmaps/tree/master/android/app/src/fdroid/play
+[googleplay_git_en]: https://github.com/organicmaps/organicmaps/tree/master/android/app/src/fdroid/play/listings/en-US
+[mergers]: https://github.com/orgs/organicmaps/teams/mergers/members
+[failing_checks]: https://hosted.weblate.org/search/organicmaps/?q=has%3Acheck+AND+state%3A%3E%3Dtranslated+language%3Aru&sort_by=target&checksum=

tools/python/clean_strings_txt.py

@@ -1,415 +0,0 @@
#!/usr/bin/env python3

import logging
import re
import subprocess
from argparse import ArgumentParser
from collections import defaultdict
from itertools import chain
from os.path import abspath, isabs

from strings_utils import StringsTxt

"""
This script determines which strings are used in the platform code (iOS and
Android) and removes all the other strings from strings.txt. For more information,
run this script with the -h option.
"""


OMIM_ROOT = ""

CORE_RE = re.compile(r'GetLocalizedString\("(.*?)"\)')

# max 2 matches in L(). Tried to make ()+ group, but no luck ..
IOS_RE = re.compile(r'L\(.*?"(\w+)".*?(?:"(\w+)")?\)')
IOS_NS_RE = re.compile(r'NSLocalizedString\(\s*?@?"(\w+)"')
IOS_XML_RE = re.compile(r'value=\"(.*?)\"')
IOS_APPTIPS_RE = re.compile(r'app_tip_\d\d')

ANDROID_JAVA_RE = re.compile(r'R\.string\.([\w_]*)')
ANDROID_JAVA_PLURAL_RE = re.compile(r'R\.plurals\.([\w_]*)')
ANDROID_XML_RE = re.compile(r'@string/(.*?)\W')

IOS_CANDIDATES_RE = re.compile(r'(.*?):[^L\(]@"([a-z0-9_]*?)"')

HARDCODED_CATEGORIES = []

HARDCODED_STRINGS = [
    # titleForBookmarkColor
    "red", "blue", "purple", "yellow", "pink", "brown", "green", "orange", "deep_purple", "light_blue",
    "cyan", "teal", "lime", "deep_orange", "gray", "blue_gray",
]


def exec_shell(test, *flags):
    spell = ["{0} {1}".format(test, list(*flags))]

    process = subprocess.Popen(
        spell,
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        shell=True
    )

    logging.info(" ".join(spell))
    out, _ = process.communicate()
    return [line for line in out.decode().splitlines() if line]


def grep_ios():
    logging.info("Grepping iOS...")
    grep = "grep -r -I 'L(\\|localizedText\\|localizedPlaceholder\\|NSLocalizedString(' {0}/iphone/*".format(
        OMIM_ROOT)
    ret = exec_shell(grep)
    ret = filter_ios_grep(ret)
    logging.info("Found in iOS: {0}".format(len(ret)))
    ret.update(get_hardcoded())

    # iOS code scans resources for all available app_tip_XX strings.
    grep = "grep app_tip_ {0}/data/strings/strings.txt".format(OMIM_ROOT)
    ret2 = exec_shell(grep)
    ret.update(parenthesize(strings_from_grepped(ret2, IOS_APPTIPS_RE)))

    return ret


def grep_android():
    logging.info("Grepping android...")
    grep = "grep -r -I 'R.string.' {0}/android/app/src/main".format(OMIM_ROOT)
    ret = android_grep_wrapper(grep, ANDROID_JAVA_RE)
    grep = "grep -r -I 'R.plurals.' {0}/android/app/src/main".format(OMIM_ROOT)
    ret.update(android_grep_wrapper(grep, ANDROID_JAVA_PLURAL_RE))
    grep = "grep -r -I '@string/' {0}/android/app/src/main/res".format(OMIM_ROOT)
    ret.update(android_grep_wrapper(grep, ANDROID_XML_RE))
    grep = "grep -r -I '@string/' {0}/android/app/src/google/res".format(OMIM_ROOT)
    ret.update(android_grep_wrapper(grep, ANDROID_XML_RE))
    grep = "grep -r -I '@string/' {0}/android/app/src/main/AndroidManifest.xml".format(
        OMIM_ROOT)
    ret.update(android_grep_wrapper(grep, ANDROID_XML_RE))
    ret = parenthesize(ret)

    logging.info("Found in android: {0}".format(len(ret)))
    ret.update(get_hardcoded())

    return ret


def grep_core():
    logging.info("Grepping core...")
    grep = "grep -wr -I 'GetLocalizedString' {0}/map {0}/platform".format(OMIM_ROOT)
    ret = android_grep_wrapper(grep, CORE_RE)
    logging.info("Found in core: {0}".format(len(ret)))

    return parenthesize(ret)


def grep_ios_candidates():
    logging.info("Grepping iOS candidates...")
    grep = "grep -nr -I '@\"' {0}/iphone/*".format(OMIM_ROOT)
    ret = exec_shell(grep)
    logging.info("Found in iOS candidates: {0}".format(len(ret)))

    strs = strings_from_grepped(ret, IOS_CANDIDATES_RE)
    return strs


def get_hardcoded():
    "search/displayed_categories.cpp"
    ret = parenthesize(HARDCODED_CATEGORIES)
    ret.update(parenthesize(HARDCODED_STRINGS))
    logging.info("Hardcoded colors and categories: {0}".format(len(ret)))
    return ret


def android_grep_wrapper(grep, regex):
    grepped = exec_shell(grep)
    return strings_from_grepped(grepped, regex)


def filter_ios_grep(strings):
    filtered = strings_from_grepped_tuple(strings, IOS_RE)
    filtered = parenthesize(process_ternary_operators(filtered))
    filtered.update(parenthesize(strings_from_grepped(strings, IOS_NS_RE)))
    filtered.update(parenthesize(strings_from_grepped(strings, IOS_XML_RE)))
    return filtered


def process_ternary_operators(filtered):
    return chain(*(s.split('" : @"') for s in filtered))


def strings_from_grepped(grepped, regexp):
    return set(chain(*(regexp.findall(s) for s in grepped if s)))


def strings_from_grepped_tuple(grepped, regexp):
    res = set()
    for e1 in grepped:
        for e2 in regexp.findall(e1):
            for e3 in e2:
                if e3:
                    res.add(e3)
    return res


def parenthesize(strings):
    return set("[{}]".format(s) for s in strings)


def write_filtered_strings_txt(filtered, filepath, languages=None):
    logging.info("Writing strings to file {0}".format(filepath))
    strings_txt = StringsTxt(
        "{0}/{1}".format(OMIM_ROOT, StringsTxt.STRINGS_TXT_PATH))
    strings_dict = {
        key: dict(strings_txt.translations[key]) for key in filtered}
    strings_txt.translations = strings_dict
    strings_txt.comments_tags_refs = {}
    strings_txt.write_formatted(target_file=filepath, langs=languages)


def get_args():
    parser = ArgumentParser(
        description="""
        A script for cleaning up the strings.txt file. It can cleanup the file
        inplace, that is all the unused strings will be removed from strings.txt,
        or it can produce two separate files for ios and android. We can also
        produce the compiled string resources specifically for each platform
        that do not contain strings for other platforms or comments."""
    )

    parser.add_argument(
        "-v", "--validate",
        dest="validate",
        action="store_true",
        help="""Check for translation definitions/keys which are no longer
        used in the codebase, exit with error if found"""
    )

    parser.add_argument(
        "-s", "--single-file",
        dest="single",
        action="store_true",
        help="""Create single cleaned up file for both platform. All strings
        that are not used in the project will be thrown away. Otherwise, two
        platform specific files will be produced."""
    )

    parser.add_argument(
        "-l", "--language",
        dest="langs", default=None,
        action="append",
        help="""The languages to be included into the resulting strings.txt
        file or files. If this param is empty, all languages from the current
        strings.txt file will be preserved."""
    )

    parser.add_argument(
        "-g", "--generate-localizations",
        dest="generate",
        action="store_true",
        help="Generate localizations for the platforms."
    )

    parser.add_argument(
        "-o", "--output",
        dest="output", default="data/strings/strings.txt",
        help="""The name for the resulting file. It will be saved to the
        project folder. Only relevant if the -s option is set."""
    )

    parser.add_argument(
        "-m", "--missing-strings",
        dest="missing",
        action="store_true",
        help="""Find the keys that are used in iOS, but are not translated
        in strings.txt and exit."""
    )

    parser.add_argument(
        "-c", "--candidates",
        dest="candidates",
        action="store_true",
        help="""Find the strings in iOS that are not in the L() macros, but that
        look like they might be keys."""
    )

    parser.add_argument(
        "-r", "--root",
        dest="omim_root", default=find_omim(),
        help="Path to the root of the OMIM project"
    )

    return parser.prog, parser.parse_args()


def do_multiple(args):
    write_filtered_strings_txt(
        grep_ios(), "{0}/ios_strings.txt".format(OMIM_ROOT), args.langs
    )
    write_filtered_strings_txt(
        grep_android(), "{0}/android_strings.txt".format(OMIM_ROOT), args.langs
    )
    if args.generate:
        print("Going to generate locs")
        exec_shell(
            "{0}/tools/unix/generate_localizations.sh".format(OMIM_ROOT),
            "android_strings.txt", "ios_strings.txt"
        )


def generate_auto_tags(ios, android, core):
    new_tags = defaultdict(set)
    for i in ios:
        new_tags[i].add("ios")

    for a in android:
        new_tags[a].add("android")

    for c in core:
        new_tags[c].add("ios")
        new_tags[c].add("android")

    return new_tags


def new_comments_and_tags(strings_txt, filtered, new_tags):
    comments_and_tags = {
        key: strings_txt.comments_tags_refs[key] for key in filtered}
    for key in comments_and_tags:
        comments_and_tags[key]["tags"] = ",".join(sorted(new_tags[key]))
    return comments_and_tags


def do_single(args):
    core = grep_core()
    ios = grep_ios()
    android = grep_android()

    new_tags = generate_auto_tags(ios, android, core)

    filtered = ios
    filtered.update(android)
    filtered.update(core)
    n_android = sum([1 for tags in new_tags.values() if "android" in tags])
    n_ios = sum([1 for tags in new_tags.values() if "ios" in tags])

    logging.info("Total strings grepped: {0}\tiOS: {1}\tandroid: {2}".format(
        len(filtered), n_android, n_ios))

    strings_txt = StringsTxt(
        "{0}/{1}".format(OMIM_ROOT, StringsTxt.STRINGS_TXT_PATH))
    logging.info("Total strings in strings.txt: {0}".format(
        len(strings_txt.translations)))

    strings_txt.translations = {
        key: dict(strings_txt.translations[key]) for key in filtered}

    strings_txt.comments_tags_refs = new_comments_and_tags(
        strings_txt, filtered, new_tags)

    path = args.output if isabs(
        args.output) else "{0}/{1}".format(OMIM_ROOT, args.output)
    strings_txt.write_formatted(target_file=path, langs=args.langs)

    if args.generate:
        exec_shell(
            "{}/unix/generate_localizations.sh".format(OMIM_ROOT),
            args.output, args.output
        )


def find_unused():
    core = grep_core()
    ios = grep_ios()
    android = grep_android()

    filtered = ios
    filtered.update(android)
    filtered.update(core)

    strings_txt = StringsTxt(
        "{0}/{1}".format(OMIM_ROOT, StringsTxt.STRINGS_TXT_PATH))
    unused = set(strings_txt.translations.keys()) - filtered
    if len(unused):
        print("Translation definitions/keys which are no longer used in the codebase:")
        print(*unused, sep="\n")
    else:
        print("There are no unused translation definitions/keys.")
    return len(unused)


def do_missing(args):
    ios = set(grep_ios())
    strings_txt_keys = set(StringsTxt().translations.keys())
    missing = ios - strings_txt_keys

    if missing:
        for m in missing:
            logging.info(m)
        exit(1)
    logging.info("Ok. No missing strings.")
    exit(0)


def do_candidates(args):
    all_candidates = defaultdict(list)
    for source, candidate in grep_ios_candidates():
        all_candidates[candidate].append(source)

    for candidate, sources in all_candidates.items():
        print(candidate, sources)


def do_ios_suspects(args):
    grep = "grep -re -I 'L(' {}/iphone/*".format(OMIM_ROOT)
    suspects = exec_shell(grep)
    SUSPECT_RE = re.compile(r"(.*?):.*?\WL\(([^@].*?)\)")
    strings = strings_from_grepped(suspects, SUSPECT_RE)
    for s in strings:
        print(s)


def find_omim():
    my_path = abspath(__file__)
    tools_index = my_path.rfind("/tools/python")
    omim_path = my_path[:tools_index]
    return omim_path


def read_hardcoded_categories():
    categoriestxt = OMIM_ROOT + "/data/categories.txt"
    logging.info(f"Retrieving search categories from: {categoriestxt}")
    with open(categoriestxt) as infile:
        return [s.strip().lstrip('@') for s in infile if s.startswith("@category_")]


if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    prog_name, args = get_args()

    OMIM_ROOT = args.omim_root

    HARDCODED_CATEGORIES = read_hardcoded_categories()
    logging.info(f"Loaded categories: {HARDCODED_CATEGORIES}")

    args.langs = set(args.langs) if args.langs else None

    if args.validate:
        if find_unused():
            print(
                "ERROR: there are unused strings\n(run \"{0} -s\" to delete them)\nTerminating...".format(prog_name))
            exit(1)
        exit(0)

    if args.missing:
        do_missing(args)
        exit(0)

    if args.candidates:
        do_candidates(args)
        exit(0)

    if args.single:
        do_single(args)
    else:
        do_multiple(args)

@@ -1,112 +0,0 @@
#!/usr/bin/env python3
import argparse
import csv
import sys


def langs_order(lang):
    if lang == 'en':
        return '0'
    return lang


def read_strings(fin):
    curtitle = None
    curtrans = {}
    for line in filter(None, map(str.strip, fin)):
        if line[0].startswith('['):
            if curtrans:
                yield curtitle, curtrans
            curtitle = line.strip('[ ]')
            curtrans = {}
        elif '=' in line and curtitle:
            lang, trans = (x.strip() for x in line.split('='))
            curtrans[lang] = trans
    if curtrans:
        yield curtitle, curtrans


def from_csv(fin, fout, delim):
    r = csv.reader(fin, delimiter=delim)
    header = next(r)
    for row in r:
        fout.write('[{}]\n'.format(row[0]))
        for i, col in enumerate(map(str.strip, row)):
            if len(col) > 0 and i > 0:
                fout.write('{} = {}\n'.format(header[i], col))
        fout.write('\n')


def to_csv(fin, fout, delim, langs):
    def write_line(writer, title, translations, langs):
        row = [title]
        for lang in langs:
            row.append('' if lang not in translations else translations[lang])
        writer.writerow(row)

    w = csv.writer(fout, delimiter=delim)
    if langs is not None:
        w.writerow(['Key'] + langs)

    strings = []
    for title, trans in read_strings(fin):
        if langs is None:
            strings.append((title, trans))
        else:
            write_line(w, title, trans, langs)

    # If we don't have langs, build a list and print
    if langs is None:
        langs = set()
        for s in strings:
            langs.update(list(s[1].values()))

        langs = sorted(langs, key=langs_order)
        w.writerow(['Key'] + langs)
        for s in strings:
            write_line(w, s[0], s[1], langs)


def from_categories(fin, fout):
    raise Exception('This conversion has not been implemented yet.')


def to_categories(fin, fout):
    for title, trans in read_strings(fin):
        fout.write('{}\n'.format(title))
        for lang in sorted(trans.keys(), key=langs_order):
            fout.write('{}:^{}\n'.format(lang, trans[lang]))
        fout.write('\n')


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Converts between strings.txt, csv files and categories.txt.')
    parser.add_argument('input', type=argparse.FileType('r'), help='Input file')
    parser.add_argument('-o', '--output', default='-', help='Output file, "-" for stdout')
    parser.add_argument('-d', '--delimiter', default=',', help='CSV delimiter')
    parser.add_argument('-l', '--langs', help='List of langs for csv: empty for default, "?" to autodetect, comma-separated for a list')
    parser.add_argument('--csv2s', action='store_true', help='CSV -> TXT')
    parser.add_argument('--s2csv', action='store_true', help='TXT -> CSV')
    parser.add_argument('--cat2s', action='store_true', help='Categories -> TXT')
    parser.add_argument('--s2cat', action='store_true', help='TXT -> Categories')
    options = parser.parse_args()

    fout = sys.stdout if options.output == '-' else open(options.output, 'w')

    if not options.langs:
        langs = 'en en-AU en-GB en-UK ar be cs da de es es-MX eu he nl fi fr hu id it ja ko nb pl pt pt-BR ro ru sk sv th tr uk vi zh-Hans zh-Hant'.split()
    elif options.langs == '?':
        langs = None
    else:
        langs = options.langs.split(',')

    if options.csv2s:
        from_csv(options.input, fout, options.delimiter)
    elif options.s2csv:
        to_csv(options.input, fout, options.delimiter, langs)
    elif options.cat2s:
        from_categories(options.input, fout)
    elif options.s2cat:
        to_categories(options.input, fout)
    else:
        raise ValueError('Please select a conversion direction.')
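This converter's file name is not visible in this capture, so the path below is hypothetical; the flags and conversion directions come from its argument parser above:

```bash
# hypothetical path "tools/python/strings_to_csv.py"
python3 tools/python/strings_to_csv.py data/strings/strings.txt --s2csv -l '?' -o strings.csv   # TXT -> CSV, autodetect languages
python3 tools/python/strings_to_csv.py strings.csv --csv2s -o strings.txt                       # CSV -> TXT
```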

@@ -1,41 +0,0 @@
from __future__ import print_function
import csv
from collections import defaultdict
import sys

if len(sys.argv) <= 1:
    print("""
    * * *

    This script turns a csv file resulting from "translated strings" in the google sheet file into a strings.txt-formated file.

    To use this script, create the translated strings using the google spread-sheet. Go to file -> Download as, and choose csv. Only the currently open sheet will be exported.
    Run this script with the path to the downloaded file as a parameter. The formatted file will be printed to the console.
    Please note, that the order of keys is not (yet) preserved.
    * * *
    """)

    exit(2)

path = sys.argv[1]
resulting_dict = defaultdict(list)

with open(path, mode='r') as infile:
    reader = csv.reader(infile)
    column_names = next(reader)

    for strings in reader:
        for i, string in enumerate(strings):
            if string:
                resulting_dict[column_names[i]].append(string)

for key in column_names:
    if not key:
        continue

    translations = resulting_dict[key]
    print("  {}".format(key))
    for translation in translations:
        print("    {}".format(translation))

    print("")
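The removed helper above explains its own usage in its help text; as a concrete sketch (the file name is not visible in this capture, so `sheet_to_strings.py` is hypothetical):

```bash
python3 tools/python/sheet_to_strings.py translated_strings.csv > new_strings.txt
```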

tools/python/strings_utils.py

@@ -1,667 +0,0 @@
#!/usr/bin/env python3
|
||||
|
||||
from argparse import ArgumentParser
|
||||
from collections import namedtuple, defaultdict
|
||||
from itertools import combinations
|
||||
from os.path import join, dirname, abspath, isabs
|
||||
import re
|
||||
from sys import argv
|
||||
|
||||
|
||||
class StringsTxt:
|
||||
|
||||
STRINGS_TXT_PATH = "data/strings/strings.txt"
|
||||
TYPES_STRINGS_TXT_PATH = "data/strings/types_strings.txt"
|
||||
|
||||
SECTION = re.compile(r"\[\[\w+.*\]\]")
|
||||
DEFINITION = re.compile(r"\[\w+.*\]")
|
||||
LANG_KEY = re.compile(r"^[a-z]{2}(-[a-zA-Z]{2,4})?(:[a-z]+)?$")
|
||||
TRANSLATION = re.compile(r"^\s*\S+\s*=\s*\S+.*$", re.S | re.MULTILINE)
|
||||
MANY_DOTS = re.compile(r"\.{4,}")
|
||||
SPACE_PUNCTUATION = re.compile(r"\s[.,?!:;]")
|
||||
PLACEHOLDERS = re.compile(r"(%\d*\$@|%[@dqus]|\^)")
|
||||
|
||||
PLURAL_KEYS = frozenset(("zero", "one", "two", "few", "many", "other"))
|
||||
SIMILARITY_THRESHOLD = 20.0 # %
|
||||
|
||||
TransAndKey = namedtuple("TransAndKey", "translation, key")
|
||||
|
||||
def __init__(self, strings_path):
|
||||
self.strings_path = strings_path
|
||||
|
||||
# dict<key, dict<lang, translation>>
|
||||
self.translations = defaultdict(lambda: defaultdict(str))
|
||||
self.translations_by_language = defaultdict(
|
||||
dict) # dict<lang, dict<key, translation>>
|
||||
self.comments_tags_refs = defaultdict(
|
||||
dict) # dict<key, dict<key, value>>
|
||||
self.all_langs = set() # including plural keys, e.g. en:few
|
||||
self.langs = set() # without plural keys
|
||||
self.duplicates = {} # dict<lang, TransAndKey>
|
||||
self.keys_in_order = []
|
||||
self.validation_errors = False
|
||||
|
||||
self._read_file()
|
||||
|
||||
def process_file(self):
|
||||
self._resolve_references()
|
||||
self._populate_translations_by_langs()
|
||||
self._find_duplicates()
|
||||
self.most_duplicated = []
|
||||
self._find_most_duplicated()
|
||||
self.similarity_indices = []
|
||||
self._find_most_similar()
|
||||
|
||||
def add_translation(self, translation, key, lang):
|
||||
if key not in self.keys_in_order:
|
||||
self.keys_in_order.append(key)
|
||||
self.translations[key][lang] = translation
|
||||
self.all_langs.add(lang)
|
||||
lang, plural_key = self._parse_lang(lang)
|
||||
self.langs.add(lang)
|
||||
|
||||
def append_to_translation(self, key, lang, tail):
|
||||
self.translations[key][lang] = self.translations[key][lang] + tail
|
||||
|
||||
def _read_file(self):
|
||||
with open(self.strings_path, encoding='utf-8') as strings:
|
||||
for line in strings:
|
||||
line = line.strip()
|
||||
if not line:
|
||||
continue
|
||||
|
||||
if self.SECTION.match(line):
|
||||
self.keys_in_order.append(line)
|
||||
continue
|
||||
|
||||
if self.DEFINITION.match(line):
|
||||
if line in self.translations:
|
||||
self._print_validation_issue(
|
||||
"Duplicate definition: {0}".format(line))
|
||||
self.translations[line] = {}
|
||||
current_key = line
|
||||
if current_key not in self.keys_in_order:
|
||||
self.keys_in_order.append(current_key)
|
||||
continue
|
||||
|
||||
if self.TRANSLATION.match(line):
|
||||
lang, tran = self._parse_lang_and_translation(line)
|
||||
|
||||
if lang == "comment" or lang == "tags" or lang == "ref":
|
||||
self.comments_tags_refs[current_key][lang] = tran
|
||||
continue
|
||||
|
||||
self.translations[current_key][lang] = tran
|
||||
|
||||
self.all_langs.add(lang)
|
||||
lang, plural_key = self._parse_lang(lang)
|
||||
self.langs.add(lang)
|
||||
|
||||
else:
|
||||
self._print_validation_issue(
|
||||
"Couldn't parse line: {0}".format(line))
|
||||
|
||||
def print_languages_stats(self, langs=None):
|
||||
self._print_header("Languages statistics")
|
||||
print("All languages in the file ({0} total):\n{1}\n".format(
|
||||
len(self.langs), ",".join(sorted(self.langs)))
|
||||
)
|
||||
print("Regional languages:\n{0}\n".format(
|
||||
",".join([lang for lang in sorted(self.langs) if len(lang) > 2]))
|
||||
)
|
||||
print("Languages using plurals:\n{0}\n".format(
|
||||
",".join([lang for lang in sorted(self.all_langs) if lang.find(":") > -1]))
|
||||
)
|
||||
|
||||
self.print_invalid_languages()
|
||||
|
||||
print_plurals = True
|
||||
if not langs:
|
||||
print_plurals = False
|
||||
langs = self.langs
|
||||
|
||||
langs_stats = []
|
||||
plurals_stats = defaultdict(dict) # dict<lang, dict<plural, int>>
|
||||
for lang in langs:
|
||||
lang_defs = set()
|
||||
if lang in self.translations_by_language:
|
||||
lang_defs = set(self.translations_by_language[lang].keys())
|
||||
plurals_stats[lang][lang] = len(lang_defs)
|
||||
for plural_key in self.PLURAL_KEYS:
|
||||
lang_plural = "{0}:{1}".format(lang, plural_key)
|
||||
if lang_plural in self.translations_by_language:
|
||||
plural_defs = set(
|
||||
self.translations_by_language[lang_plural].keys())
|
||||
plurals_stats[lang][lang_plural] = len(plural_defs)
|
||||
lang_defs = lang_defs.union(plural_defs)
|
||||
langs_stats.append((lang, len(lang_defs)))
|
||||
|
||||
print("\nNumber of translations out of total:\n")
|
||||
|
||||
langs_stats.sort(key=lambda x: x[1], reverse=True)
|
||||
|
||||
n_trans = len(self.translations)
|
||||
for lang, lang_stat in langs_stats:
|
||||
print("{0:7} : {1} / {2} ({3}%)".format(
|
||||
lang, lang_stat, n_trans, round(100 * lang_stat / n_trans)
|
||||
))
|
||||
if print_plurals and not (len(plurals_stats[lang]) == 1 and lang in plurals_stats[lang]):
|
||||
for lang_plural, plural_stat in plurals_stats[lang].items():
|
||||
print(" {0:13} : {1}".format(lang_plural, plural_stat))
|
||||
|
||||
def print_invalid_languages(self):
|
||||
invalid_langs = []
|
||||
invalid_plurals = []
|
||||
for lang in self.all_langs:
|
||||
if not self.LANG_KEY.match(lang):
|
||||
invalid_langs.append(lang)
|
||||
lang_key, plural_key = self._parse_lang(lang)
|
||||
if plural_key and plural_key not in self.PLURAL_KEYS:
|
||||
invalid_plurals.append(lang)
|
||||
|
||||
if invalid_langs:
|
||||
self._print_validation_issue("Invalid languages: {0}".format(
|
||||
",".join(sorted(invalid_langs))
|
||||
))
|
||||
|
||||
if invalid_plurals:
|
||||
self._print_validation_issue("Invalid plurals: {0}".format(
|
||||
",".join(sorted(invalid_plurals))
|
||||
))
|
||||
|
||||
def print_definitions_stats(self, langs=None):
|
||||
self._print_header("Definitions stats")
|
||||
print("Number of translations out of total:\n")
|
||||
if not langs:
|
||||
langs = self.langs
|
||||
def_stats = []
|
||||
for definition in self.translations.keys():
|
||||
def_langs = set()
|
||||
for def_lang in self.translations[definition].keys():
|
||||
def_lang, plural_key = self._parse_lang(def_lang)
|
||||
if def_lang in langs:
|
||||
def_langs.add(def_lang)
|
||||
def_stats.append((definition, len(def_langs)))
|
||||
def_stats.sort(key=lambda x: x[1], reverse=True)
|
||||
|
||||
n_langs = len(langs)
|
||||
for definition, n_trans in def_stats:
|
||||
print("{0}\t{1} / {2} ({3}%)".format(
|
||||
definition, n_trans, n_langs, round(100 * n_trans / n_langs)
|
||||
))
|
||||
|
||||
def print_duplicates(self, langs=None):
|
||||
self._print_header("Duplicate translations")
|
||||
print("Same translations used in several definitions:")
|
||||
langs = self._expand_plurals(langs) if langs else self.all_langs
|
||||
dups = list(self.duplicates.items())
|
||||
dups.sort(key=lambda x: x[0])
|
||||
for lang, trans_and_keys in dups:
|
||||
if lang not in langs:
|
||||
continue
|
||||
print("\nLanguage: {0}".format(lang))
|
||||
last_one = ""
|
||||
keys = []
|
||||
for tr in trans_and_keys:
|
||||
if last_one != tr.translation:
|
||||
self._print_keys_for_duplicates(keys, last_one)
|
||||
keys = []
|
||||
last_one = tr.translation
|
||||
keys.append(tr.key)
|
||||
self._print_keys_for_duplicates(keys, last_one)
|
||||
|
||||
def _print_keys_for_duplicates(self, keys, last_one):
|
||||
if last_one:
|
||||
print("\t{0}: {1}".format(",".join(keys), last_one))
|
||||
|
||||
def _expand_plurals(self, langs):
|
||||
expanded_langs = set()
|
||||
for lang_plural in self.all_langs:
|
||||
lang, plural_key = self._parse_lang(lang_plural)
|
||||
if lang in langs:
|
||||
expanded_langs.add(lang_plural)
|
||||
return expanded_langs
|
||||
|
||||
def _parse_lang(self, lang):
|
||||
plural_key = None
|
||||
sep_pos = lang.find(":")
|
||||
if sep_pos > -1:
|
||||
lang, plural_key = lang.split(":")
|
||||
return lang, plural_key
|
||||
|
||||
def _parse_lang_and_translation(self, line):
|
||||
lang, trans = line.split("=", 1)
|
||||
if self.MANY_DOTS.search(trans):
|
||||
self._print_validation_issue(
|
||||
"4 or more dots in the string: {0}".format(line), warning=True)
|
||||
return (lang.strip(), trans.strip())
|
||||
|
||||
def _resolve_references(self):
|
||||
resolved = set()
|
||||
for definition in list(self.comments_tags_refs.keys()):
|
||||
visited = set()
|
||||
self._resolve_ref(definition, visited, resolved)
|
||||
|
||||
def _resolve_ref(self, definition, visited, resolved):
|
||||
visited.add(definition)
|
||||
ref = self.comments_tags_refs[definition].get("ref")
|
||||
if definition not in resolved and ref:
|
||||
ref = "[{0}]".format(ref)
|
||||
if ref not in self.translations:
|
||||
self._print_validation_issue("Couldn't find reference: {0}".format(self.comments_tags_refs[definition]["ref"]))
|
||||
resolved.add(definition)
|
||||
return
|
||||
if ref in visited:
|
||||
self._print_validation_issue("Circular reference: {0} in {1}".format(self.comments_tags_refs[definition]["ref"], visited))
|
||||
else:
|
||||
# resolve nested refs recursively
|
||||
self._resolve_ref(ref, visited, resolved)
|
||||
for lang, trans in self.translations[ref].items():
|
||||
if lang not in self.translations[definition]:
|
||||
self.translations[definition][lang] = trans
|
||||
resolved.add(definition)
|
||||
|
||||
def _populate_translations_by_langs(self):
|
||||
for lang in self.all_langs:
|
||||
trans_for_lang = {}
|
||||
for key, tran in self.translations.items(): # (tran = dict<lang, translation>)
|
||||
if lang not in tran:
|
||||
continue
|
||||
trans_for_lang[key] = tran[lang]
|
||||
self.translations_by_language[lang] = trans_for_lang
|
||||
|
||||
def _find_duplicates(self):
|
||||
for lang, tran in self.translations_by_language.items():
|
||||
trans_for_lang = [self.TransAndKey(
|
||||
x[1], x[0]) for x in tran.items()]
|
||||
trans_for_lang.sort(key=lambda x: x.translation)
|
||||
prev_tran = self.TransAndKey("", "")
|
||||
possible_duplicates = set()
|
||||
for curr_tran in trans_for_lang:
|
||||
if curr_tran.translation == prev_tran.translation:
|
||||
possible_duplicates.add(prev_tran)
|
||||
possible_duplicates.add(curr_tran)
|
||||
else:
|
||||
prev_tran = curr_tran
|
||||
|
||||
self.duplicates[lang] = sorted(list(possible_duplicates))
|
||||
|
||||
def _find_most_duplicated(self):
|
||||
most_duplicated = defaultdict(int)
|
||||
for trans_and_keys in self.duplicates.values():
|
||||
for trans_and_key in trans_and_keys:
|
||||
most_duplicated[trans_and_key.key] += 1
|
||||
|
||||
self.most_duplicated = sorted(
|
||||
most_duplicated.items(), key=lambda x: x[1], reverse=True)
|
||||
|
||||
def print_most_duplicated(self):
|
||||
self._print_header("Most duplicated")
|
||||
print("Definitions with the most translations shared with other definitions:\n")
|
||||
for pair in self.most_duplicated:
|
||||
print("{}\t{}".format(pair[0], pair[1]))
|
||||
|
||||
def print_missing_translations(self, langs=None):
|
||||
self._print_header("Untranslated definitions")
|
||||
if not langs:
|
||||
langs = sorted(self.langs)
|
||||
all_translation_keys = set(self.translations.keys())
|
||||
for lang in langs:
|
||||
keys_for_lang = set(self.translations_by_language[lang].keys())
|
||||
missing_keys = all_translation_keys - keys_for_lang
|
||||
for plural_key in self.PLURAL_KEYS:
|
||||
lang_plural = "{0}:{1}".format(lang, plural_key)
|
||||
if lang_plural in self.translations_by_language:
|
||||
missing_keys -= set(
|
||||
self.translations_by_language[lang_plural].keys())
|
||||
missing_keys = sorted(missing_keys)
|
||||
print("Language: {0} ({1} missing)\n\t{2}\n".format(
|
||||
lang, len(missing_keys), "\n\t".join(missing_keys)))
|
||||
|
||||
def write_formatted(self, target_file=None, langs=None, keep_resolved=False):
|
||||
before_block = ""
|
||||
langs = self._expand_plurals(langs) if langs else self.all_langs
|
||||
en_langs = []
|
||||
other_langs = []
|
||||
for lang in langs:
|
||||
if lang.startswith("en"):
|
||||
en_langs.append(lang)
|
||||
else:
|
||||
other_langs.append(lang)
|
||||
sorted_langs = sorted(en_langs) + sorted(other_langs)
|
||||
|
||||
if target_file is None:
|
||||
target_file = self.strings_path
|
||||
if target_file.endswith('countries_names.txt'):
|
||||
section_padding = 0 * " "
|
||||
key_padding = 2 * " "
|
||||
else:
|
||||
section_padding = 2 * " "
|
||||
key_padding = 4 * " "
|
||||
with open(target_file, "w", encoding='utf-8') as outfile:
|
||||
for key in self.keys_in_order:
|
||||
# TODO: sort definitions and sections too?
|
||||
if not key:
|
||||
continue
|
||||
if key in self.translations:
|
||||
tran = self.translations[key]
|
||||
else:
|
||||
if key.startswith("[["):
|
||||
outfile.write("{0}{1}\n".format(before_block, key))
|
||||
before_block = "\n"
|
||||
continue
|
||||
|
||||
outfile.write("{0}{1}{2}\n".format(before_block, section_padding, key))
|
||||
before_block = "\n"
|
||||
|
||||
ref_tran = {}
|
||||
if key in self.comments_tags_refs:
|
||||
for k, v in self.comments_tags_refs[key].items():
|
||||
outfile.write("{0}{1} = {2}\n".format(key_padding, k, v))
|
||||
if not keep_resolved and k == "ref":
|
||||
ref_tran = self.translations.get("[{0}]".format(v))
|
||||
|
||||
self._write_translations_for_langs(outfile, sorted_langs, tran, ref_tran, key_padding)
|
||||
|
||||
def _write_translations_for_langs(self, outfile, langs, tran, ref_tran, key_padding):
|
||||
for lang in langs:
|
||||
# don't output translation if it's duplicated in referenced definition
|
||||
if lang in tran and tran[lang] != ref_tran.get(lang):
|
||||
outfile.write("{0}{1} = {2}\n".format(
|
||||
key_padding, lang, tran[lang].replace("...", "…")
|
||||
))
|
||||
|
||||
def _compare_blocks(self, key_1, key_2):
|
||||
block_1 = self.translations[key_1]
|
||||
block_2 = self.translations[key_2]
|
||||
|
||||
common_keys = set(block_1.keys()).intersection(set(block_2))
|
||||
|
||||
common_elements = 0
|
||||
for key in common_keys:
|
||||
if block_1[key] == block_2[key]:
|
||||
common_elements += 1
|
||||
|
||||
sim_index = round(100 * 2 * common_elements /
|
||||
(len(block_1) + len(block_2)))
|
||||
if sim_index >= self.SIMILARITY_THRESHOLD:
|
||||
return [("{} <-> {}".format(key_1, key_2), sim_index)]
|
||||
return []
|
||||
|
||||
def _find_most_similar(self):
|
||||
search_scope = [x for x in self.most_duplicated if x[1]
|
||||
> len(self.translations[x[0]]) / 10]
|
||||
for one, two in combinations(search_scope, 2):
|
||||
self.similarity_indices.extend(
|
||||
self._compare_blocks(one[0], two[0]))
|
||||
|
||||
self.similarity_indices.sort(key=lambda x: x[1], reverse=True)
|
||||
|
||||
def print_most_similar(self):
|
||||
self._print_header("Most similar definitions")
|
||||
print("Definitions most similar to other definitions, i.e. with a lot of same translations:\n")
|
||||
for index in self.similarity_indices:
|
||||
print("{} : {}%".format(index[0], index[1]))
|
||||
|
||||
def _print_header(self, string):
|
||||
# print headers in green colour
|
||||
print("\n{line} \033[0;32m{str}\033[0m {line}\n".format(
|
||||
line="=" * round((70 - len(string)) / 2),
|
||||
str=string
|
||||
))
|
||||
|
||||
def _print_validation_issue(self, issue, warning=False):
|
||||
if warning:
|
||||
# print warnings in yellow colour
|
||||
print("\033[0;33mWARNING: {0}\033[0m".format(issue))
|
||||
return
|
||||
self.validation_errors = True
|
||||
# print errors in red colour
|
||||
print("\033[0;31mERROR: {0}\033[0m".format(issue))
|
||||
|
||||
def _has_space_before_punctuation(self, lang, string):
|
||||
if lang == "fr": # make exception for French
|
||||
return False
|
||||
if self.SPACE_PUNCTUATION.search(string):
|
||||
return True
|
||||
return False
|
||||
|
||||
def print_strings_with_spaces_before_punctuation(self, langs=None):
|
||||
self._print_header("Strings with spaces before punctuation")
|
||||
langs = self._expand_plurals(langs) if langs else self.all_langs
|
||||
for key, lang_and_trans in self.translations.items():
|
||||
wrote_key = False
|
||||
for lang, translation in lang_and_trans.items():
|
||||
if lang in langs:
|
||||
if self._has_space_before_punctuation(lang, translation):
|
||||
if not wrote_key:
|
||||
print("\n{}".format(key))
|
||||
wrote_key = True
|
||||
self._print_validation_issue(
|
||||
"{0} : {1}".format(lang, translation), warning=True)
|
||||
|
||||
def _check_placeholders_in_block(self, block_key, langs):
|
||||
wrong_placeholders_strings = []
|
||||
en_lang = "en"
|
||||
en_trans = self.translations[block_key].get(en_lang)
|
||||
if not en_trans:
|
||||
for plural_key in sorted(self.PLURAL_KEYS):
|
||||
if en_trans:
|
||||
break
|
||||
en_lang = "en:{0}".format(plural_key)
|
||||
en_trans = self.translations[block_key].get(en_lang)
|
||||
if not en_trans:
|
||||
self._print_validation_issue(
|
||||
"No English for definition: {}".format(block_key))
|
||||
return None, wrong_placeholders_strings
|
||||
|
||||
en_placeholders = sorted(self.PLACEHOLDERS.findall(en_trans))
|
||||
|
||||
for lang, translation in self.translations[block_key].items():
|
||||
found = sorted(self.PLACEHOLDERS.findall(translation))
|
||||
if not en_placeholders == found: # must be sorted
|
||||
wrong_placeholders_strings.append(
|
||||
"{} = {}".format(lang, translation))
|
||||
|
||||
return en_lang, wrong_placeholders_strings
|
||||
|
||||
    def print_strings_with_wrong_placeholders(self, langs=None):
        self._print_header("Strings with a wrong number of placeholders")
        langs = self._expand_plurals(langs) if langs else self.all_langs
        for key, lang_and_trans in self.translations.items():
            en_lang, wrong_placeholders = self._check_placeholders_in_block(
                key, langs)
            if not wrong_placeholders:
                continue

            print("\n{0}".format(key))
            print("{0} = {1}".format(en_lang, lang_and_trans[en_lang]))
            for wp in wrong_placeholders:
                self._print_validation_issue(wp)

    def validate(self, langs=None):
        self._print_header("Validating the file...")
        if self.validation_errors:
            self._print_validation_issue(
                "There were errors reading the file, check the output above")
        self._print_header("Invalid languages")
        self.print_invalid_languages()
        self.print_strings_with_spaces_before_punctuation(langs=langs)
        self.print_strings_with_wrong_placeholders(langs=langs)
        return not self.validation_errors

    def translate(self, source_language, target_language):
        from translate import translate_one
        self._print_header(f"Translating from {source_language} to {target_language}...")
        for key, source in self.translations_by_language[source_language].items():
            if key in self.translations_by_language[target_language]:
                continue
            translation = translate_one(source, source_language, target_language)
            print(f"{source} -> {translation}")
            self.add_translation(translation, key, target_language)

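translate_one is imported from what is presumably a sibling translate module under tools/python; its implementation is outside this diff. For offline dry runs, a trivial stand-in with the same call shape could be:

# Hypothetical stub matching the call shape used above; the real
# translate.translate_one presumably talks to a translation backend.
def translate_one(source, source_language, target_language):
    return "[{}->{}] {}".format(source_language, target_language, source)
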
def find_project_root():
    my_path = abspath(__file__)
    tools_index = my_path.rfind("/tools/python")
    project_root = my_path[:tools_index]
    return project_root

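An equivalent way to locate the project root, assuming this script stays at <organicmaps>/tools/python/strings_utils.py, is to walk two directories up with pathlib:

from pathlib import Path

# Sketch of an equivalent lookup: strings_utils.py -> python -> tools -> root.
def find_project_root_via_pathlib():
    return str(Path(__file__).resolve().parents[2])
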
def get_args():
    parser = ArgumentParser(
        description="""
        Validates and formats translation files (strings.txt, types_strings.txt),
        prints file statistics, finds duplicate and missing translations, etc."""
    )

    parser.add_argument(
        "input",
        nargs="?", default=None,
        help="input file path, defaults to <organicmaps>/data/strings/strings.txt"
    )

    parser.add_argument(
        "-t", "--types-strings",
        action="store_true",
        help="use <organicmaps>/data/strings/types_strings.txt as the default input file"
    )

    parser.add_argument(
        "-o", "--output",
        default=None, nargs="?", const=True,
        help="""path to write the formatted output file to, with languages
        sorted alphabetically except English translations going first
        (overwrites the input file by default)"""
    )

    parser.add_argument(
        "--keep-resolved-references",
        dest="keep_resolved",
        action="store_true",
        help="""keep resolved translation references when writing the output file;
        used with --output only"""
    )

    parser.add_argument(
        "-l", "--languages",
        dest="langs", default=None,
        help="a comma-separated list of languages to limit the output to, if applicable"
    )

    parser.add_argument(
        "-pl", "--print-languages",
        dest="print_langs",
        action="store_true",
        help="print languages statistics"
    )

    parser.add_argument(
        "-pf", "--print-definitions",
        dest="print_defs",
        action="store_true",
        help="print definitions statistics"
    )

    parser.add_argument(
        "-pd", "--print-duplicates",
        dest="print_dups",
        action="store_true",
        help="print same translations used in several definitions"
    )

    parser.add_argument(
        "-po", "--print-most-duplicated",
        dest="print_mdups",
        action="store_true",
        help="""print definitions with the most translations shared
        with other definitions"""
    )

    parser.add_argument(
        "-ps", "--print-similar",
        dest="print_similar",
        action="store_true",
        help="""print definitions most similar to other definitions,
        i.e. sharing many of the same translations"""
    )

    parser.add_argument(
        "-pm", "--missing-translations",
        dest="print_missing",
        action="store_true",
        help="print untranslated definitions"
    )

    parser.add_argument(
        "-v", "--validate",
        dest="validate",
        action="store_true",
        help="""validate the file format, placeholders usage, whitespace
        before punctuation, etc.; exit with an error if not valid"""
    )

    parser.add_argument(
        "-tr", "--translate",
        nargs=2,
        dest="translate",
        metavar=("source_lang", "target_lang"),
        help="translate SOURCE_LANG into TARGET_LANG"
    )

    return parser.parse_args()

if __name__ == "__main__":
    import sys

    args = get_args()

    if not args.input:
        args.input = StringsTxt.TYPES_STRINGS_TXT_PATH if args.types_strings else StringsTxt.STRINGS_TXT_PATH
        args.input = "{0}/{1}".format(find_project_root(), args.input)
    args.input = abspath(args.input)
    print("Input file: {0}\n".format(args.input))

    strings = StringsTxt(args.input)
    strings.process_file()

    if args.langs:
        args.langs = args.langs.split(",")
        print("Limit output to languages:\n{0}\n".format(",".join(args.langs)))

    if args.print_langs:
        strings.print_languages_stats(langs=args.langs)

    if args.print_defs:
        strings.print_definitions_stats(langs=args.langs)

    if args.print_dups:
        strings.print_duplicates(langs=args.langs)

    if args.print_mdups:
        strings.print_most_duplicated()

    if args.print_similar:
        strings.print_most_similar()

    if args.print_missing:
        strings.print_missing_translations(langs=args.langs)

    if args.validate:
        if not strings.validate(langs=args.langs):
            # Print in red colour.
            print("\n\033[0;31mThe file is not valid, terminating\033[0m")
            sys.exit(1)

    if args.translate:
        if not args.output:
            args.output = True
        strings.translate(args.translate[0], args.translate[1])

    if args.output:
        if args.output is True:
            args.output = args.input
        else:
            args.output = abspath(args.output)
        print("\nWriting formatted output file: {0}\n".format(args.output))
        strings.write_formatted(target_file=args.output, langs=args.langs, keep_resolved=args.keep_resolved)
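Typical invocations, using the flags defined above: ./tools/python/strings_utils.py --validate checks the default strings.txt and exits non-zero on errors; adding --types-strings switches the default input to types_strings.txt; --translate en de --output fills in missing German translations from English and writes the formatted file back in place.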
@@ -1 +0,0 @@
Subproject commit 9ed9c04c53da70828d5594c2a48fcdf1096cdf37

@@ -1,110 +1,6 @@
#!/bin/bash
set -euo pipefail

# Use Ruby from Homebrew on macOS, because the system Ruby is
# outdated, broken, or slated for removal in future releases.
case $OSTYPE in
  darwin*)
    if [ -x /usr/local/opt/ruby/bin/ruby ]; then
      PATH="/usr/local/opt/ruby/bin:$PATH"
    elif [ -x "${HOMEBREW_PREFIX:-/opt/homebrew}/opt/ruby/bin/ruby" ]; then
      PATH="${HOMEBREW_PREFIX:-/opt/homebrew}/opt/ruby/bin:$PATH"
    else
      echo 'Please install Homebrew ruby by running "brew install ruby"'
      exit 1
    fi ;;
  *)
    if [ ! -x "$(which ruby)" ]; then
      echo "Please install ruby (https://www.ruby-lang.org/en/documentation/installation/)"
      exit 1
    fi ;;
esac

THIS_SCRIPT_PATH=$(cd "$(dirname "$0")"; pwd -P)
OMIM_PATH="$THIS_SCRIPT_PATH/../.."
TWINE_PATH="$OMIM_PATH/tools/twine"

if [ ! -e "$TWINE_PATH/twine" ]; then
  echo "You need to have the twine submodule present to run this script"
  echo "Try 'git submodule update --init --recursive'"
  exit 1
fi

TWINE_COMMIT="$(git -C "$TWINE_PATH" rev-parse HEAD)"
TWINE_GEM="twine-$TWINE_COMMIT.gem"

if [ ! -f "$TWINE_PATH/$TWINE_GEM" ]; then
  echo "Building & installing twine gem..."
  (
    cd "$TWINE_PATH" \
    && rm -f ./*.gem \
    && gem build --output "$TWINE_GEM" \
    && gem install --user-install "$TWINE_GEM"
  )
fi

# Generate android/iphone/jquery localization files from the strings files.
TWINE="$(gem contents --show-install-dir twine)/bin/twine"
if [[ $TWINE == *".om/bin/twine" ]]; then
  echo "Using the correctly patched submodule version of Twine"
else
  echo "Looks like you have a non-patched version of twine, try to uninstall it with '[sudo] gem uninstall twine'"
  exit 1
fi

OMIM_DATA="$OMIM_PATH/data"
STRINGS_PATH="$OMIM_DATA/strings"

# Validate and format/sort the strings files.
STRINGS_UTILS="$OMIM_PATH/tools/python/strings_utils.py"
"$STRINGS_UTILS" --validate --output
"$STRINGS_UTILS" --types-strings --validate --output
"$STRINGS_UTILS" "$STRINGS_PATH/sound.txt" --validate --output
"$STRINGS_UTILS" "$OMIM_DATA/countries_names.txt" --validate --output
"$STRINGS_UTILS" "$OMIM_PATH/iphone/plist.txt" --validate --output

# Check for unused strings.
CLEAN_STRINGS="$OMIM_PATH/tools/python/clean_strings_txt.py"
"$CLEAN_STRINGS" --validate

STRINGS_FILE=$STRINGS_PATH/strings.txt
TYPES_STRINGS_FILE=$STRINGS_PATH/types_strings.txt

# TODO: Add a validate-strings-file call to check for duplicates (and avoid Android build errors) when tags are properly set.
"$TWINE" generate-all-localization-files --include translated --format android --untagged --tags android "$STRINGS_FILE" "$OMIM_PATH/android/app/src/main/res/"
"$TWINE" generate-all-localization-files --include translated --format android --file-name types_strings.xml --untagged --tags android "$TYPES_STRINGS_FILE" "$OMIM_PATH/android/app/src/main/res/"
"$TWINE" generate-all-localization-files --format apple --untagged --tags ios "$STRINGS_FILE" "$OMIM_PATH/iphone/Maps/LocalizedStrings/"
"$TWINE" generate-all-localization-files --format apple-plural --untagged --tags ios "$STRINGS_FILE" "$OMIM_PATH/iphone/Maps/LocalizedStrings/"
"$TWINE" generate-all-localization-files --format apple --file-name LocalizableTypes.strings --untagged --tags ios "$TYPES_STRINGS_FILE" "$OMIM_PATH/iphone/Maps/LocalizedStrings/"
"$TWINE" generate-all-localization-files --format apple --file-name InfoPlist.strings "$OMIM_PATH/iphone/plist.txt" "$OMIM_PATH/iphone/Maps/LocalizedStrings/"
"$TWINE" generate-all-localization-files --format jquery "$OMIM_DATA/countries_names.txt" "$OMIM_DATA/countries-strings/"
"$TWINE" generate-all-localization-files --format jquery "$STRINGS_PATH/sound.txt" "$OMIM_DATA/sound-strings/"

# Generate the list of supported languages and store it in gradle.properties,
# to be used by build.gradle in resConfig.
SUPPORTED_LOCALIZATIONS="supportedLocalizations="$(sed -nEe "s/ +([a-zA-Z]{2}(-[a-zA-Z]{2,})?) = .*$/\1/p" "$STRINGS_PATH/strings.txt" | sort -u | tr '\n' ',' | sed -e 's/-/_/g' -e 's/,$//')
# Chinese locales should correspond to the Android codes.
SUPPORTED_LOCALIZATIONS=${SUPPORTED_LOCALIZATIONS/zh_Hans/zh}
SUPPORTED_LOCALIZATIONS=${SUPPORTED_LOCALIZATIONS/zh_Hant/zh_HK,zh_MO,zh_TW}
SUPPORTED_LOCALIZATIONS=${SUPPORTED_LOCALIZATIONS/he/iw}
SUPPORTED_LOCALIZATIONS=${SUPPORTED_LOCALIZATIONS/id/in}
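For example, strings.txt sections for en, he, zh-Hans and zh-Hant yield supportedLocalizations=en,iw,zh,zh_HK,zh_MO,zh_TW: the sed pass extracts and sorts the language codes, and the substitutions above map them to Android's legacy locale codes (iw for Hebrew, in for Indonesian) and to the regional zh_* variants for Traditional Chinese.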
GRADLE_PROPERTIES="$OMIM_PATH/android/gradle.properties"
if [ "$SUPPORTED_LOCALIZATIONS" != "$(grep supportedLocalizations "$GRADLE_PROPERTIES")" ]; then
  sed -i.bak 's/supportedLocalizations.*/'"$SUPPORTED_LOCALIZATIONS"'/' "$GRADLE_PROPERTIES"
  rm "$GRADLE_PROPERTIES.bak"
fi

# Generate locales_config.xml to let users change the app's language on Android 13+.
LOCALES_CONFIG="$OMIM_PATH/android/app/src/main/res/xml/locales_config.xml"
SUPPORTED_LOCALIZATIONS=${SUPPORTED_LOCALIZATIONS/supportedLocalizations=/en,}
SUPPORTED_LOCALIZATIONS=${SUPPORTED_LOCALIZATIONS/,en,/,}
SUPPORTED_LOCALIZATIONS=${SUPPORTED_LOCALIZATIONS//_/-}
LOCALES_CONTENT='<?xml version="1.0" encoding="utf-8"?>
<locale-config xmlns:android="http://schemas.android.com/apk/res/android">'
set +x
for lang in ${SUPPORTED_LOCALIZATIONS//,/ }; do
  LOCALES_CONTENT="$LOCALES_CONTENT"$'\n'"    <locale android:name=\"$lang\" />"
done
LOCALES_CONTENT="$LOCALES_CONTENT"$'\n''</locale-config>'
if [ "$LOCALES_CONTENT" != "$(cat "$LOCALES_CONFIG")" ]; then
  echo "$LOCALES_CONTENT" > "$LOCALES_CONFIG"
  echo Updated "$LOCALES_CONFIG" file
fi
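The resulting file is a <locale-config> element with one <locale android:name="..."/> entry per supported language (English first, with dashes instead of underscores), which is what the Android 13+ per-app language picker reads.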
echo "Regenerating localizations is no longer required. No more hassle." >&2
|
||||
echo "Please refer to the TRANSLATIONS.md file for updated instructions." >&2
|
||||
exit 1
|
||||