wikiparser

Extracts articles from Wikipedia Enterprise HTML dumps for embedding into the mwm map files created by the Organic Maps generator.

Extracted articles are identified by Wikipedia article titles in URL or text form (language-specific) and by Wikidata QIDs (language-agnostic). OpenStreetMap commonly stores these as wikipedia*= and wikidata= tags on objects.
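
For example, an OpenStreetMap object for the Eiffel Tower could carry tags like the following (values shown for illustration only):

wikidata=Q243
wikipedia=fr:Tour Eiffel
wikipedia:en=Eiffel Tower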

Configuring

article_processing_config.json should be updated when adding a new language. It defines article sections that are not important for users and should be removed from the extracted HTML.
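
As a rough sketch of the idea (the key name and section lists below are hypothetical; consult the actual article_processing_config.json in this repository for the real schema), such a config maps each language to the section headings that should be stripped:

{
  "sections_to_remove": {
    "en": ["References", "External links", "Bibliography"],
    "de": ["Einzelnachweise", "Weblinks", "Literatur"]
  }
}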

Usage

To use it with the map generator, see the run.sh script and its built-in help documentation. It handles extracting the tags, using multiple dumps, and re-running to convert titles to QIDs and extract them across languages.
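
A purely hypothetical invocation, assuming run.sh takes the maps build directory followed by one or more Enterprise dump archives (the paths and argument order here are assumptions; check the script's own help for the actual interface):

# Hypothetical arguments; run ./run.sh --help (or read the script) for the real interface.
./run.sh ~/maps_build/planet_build \
  ~/dumps/enwiki-NS0-20230801-ENTERPRISE-HTML.json.tar.gz \
  ~/dumps/dewiki-NS0-20230801-ENTERPRISE-HTML.json.tar.gz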

To run the wikiparser manually or for development, see below.

First, install the Rust language tools (cargo and rustc).
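
For example, using rustup, the official Rust toolchain installer:

# Installs rustup, which provides cargo and rustc.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Verify that cargo is on PATH.
cargo --version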

For best performance, use --release when building or running.

You can run the program from within this directory using cargo run --release --.

Alternatively, build it with cargo build --release, which places the binary in ./target/release/om-wikiparser.
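
For example, to build the optimized binary once and then invoke it directly:

cargo build --release
./target/release/om-wikiparser --help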

Run the program with the --help flag to see all supported arguments.

$ cargo run --release -- --help
Extract articles from Wikipedia Enterprise HTML dumps

Usage: om-wikiparser <COMMAND>

Commands:
  get-articles  Extract, filter, and simplify article HTML from Wikipedia Enterprise HTML dumps
  get-tags      Extract wikidata/wikipedia tags from an OpenStreetMap PBF dump
  simplify      Apply the same html article simplification used when extracting articles to stdin, and write it to stdout
  help          Print this message or the help of the given subcommand(s)

Options:
  -h, --help     Print help (see more with '--help')
  -V, --version  Print version

Each command has its own additional help:

$ cargo run -- get-articles --help
Extract, filter, and simplify article HTML from Wikipedia Enterprise HTML dumps.

Expects an uncompressed dump (newline-delimited JSON) connected to stdin.

Usage: om-wikiparser get-articles [OPTIONS] <OUTPUT_DIR>

Arguments:
  <OUTPUT_DIR>
          Directory to write the extracted articles to

Options:
      --write-new-qids <FILE>
          Append to the provided file path the QIDs of articles matched by title but not QID.

          Use this to save the QIDs of articles you know the url of, but not the QID. The same path can later be passed to the `--wikidata-qids` option to extract them from another language's dump. Writes are atomically appended to the file, so the same path may be used by multiple concurrent instances.

  -h, --help
          Print help (see a summary with '-h')

FILTERS:
      --osm-tags <FILE.tsv>
          Path to a TSV file that contains one or more of `wikidata`, `wikipedia` columns.

          This can be generated with the `get-tags` command or `osmconvert --csv-headline --csv 'wikidata wikipedia'`.

      --wikidata-qids <FILE>
          Path to file that contains a Wikidata QID to extract on each line (e.g. `Q12345`)

      --wikipedia-urls <FILE>
          Path to file that contains a Wikipedia article url to extract on each line (e.g. `https://lang.wikipedia.org/wiki/Article_Title`)

It takes as inputs:

  • A Wikipedia Enterprise HTML dump (newline-delimited JSON), decompressed and connected to stdin.
  • A directory to write the extracted articles to, as a CLI argument.
  • Any number of filters passed:
    • A TSV file of wikidata QIDs and wikipedia URLs, created by the get-tags command or osmconvert, passed with the CLI flag --osm-tags.
    • A file of Wikidata QIDs to extract, one per line (e.g. Q12345), passed with the CLI flag --wikidata-qids.
    • A file of Wikipedia article URLs to extract, one per line (e.g. https://$LANG.wikipedia.org/wiki/$ARTICLE_TITLE), passed with the CLI flag --wikipedia-urls.
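
A minimal standalone invocation, assuming a single downloaded dump and a hand-written QID list (the file names are illustrative; Q243 is the Eiffel Tower and Q90 is Paris):

# Two QIDs to extract, one per line.
printf 'Q243\nQ90\n' > wikidata_qids.txt
mkdir -p descriptions
# Stream the decompressed dump to stdin (-O writes the archive contents to stdout).
tar xzOf enwiki-NS0-20230801-ENTERPRISE-HTML.json.tar.gz \
  | om-wikiparser get-articles --wikidata-qids wikidata_qids.txt descriptions/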

As an example of manual usage with the map generator:

  • Assuming this program is installed to $PATH as om-wikiparser.
  • Download the dumps in the desired languages (use the files with the format ${LANG}wiki-NS0-${DATE}-ENTERPRISE-HTML.json.tar.gz), and set DUMP_DOWNLOAD_DIR to the directory they are downloaded to.
  • Run a maps build with descriptions enabled to generate the id_to_wikidata.csv and wiki_urls.txt files.
  • Run the following from within the intermediate_data subdirectory of the maps build directory:
# Transform intermediate files from generator.
cut -f 2 id_to_wikidata.csv > wikidata_qids.txt
tail -n +2 wiki_urls.txt | cut -f 3 > wikipedia_urls.txt
# Enable backtraces in errors and panics.
export RUST_BACKTRACE=1
# Set log level to debug
export RUST_LOG=om_wikiparser=debug
# Begin extraction.
for dump in $DUMP_DOWNLOAD_DIR/*-ENTERPRISE-HTML.json.tar.gz
do
  tar xzOf "$dump" | om-wikiparser get-articles \
    --wikidata-qids wikidata_qids.txt \
    --wikipedia-urls wikipedia_urls.txt \
    --write-new-qids new_qids.txt \
    descriptions/
done
# Extract discovered QIDs.
for dump in $DUMP_DOWNLOAD_DIR/*-ENTERPRISE-HTML.json.tar.gz
do
  tar xzOf "$dump" | om-wikiparser get-articles \
    --wikidata-qids new_qids.txt \
    descriptions/
done

Alternatively, extract the tags directly from a .osm.pbf file (referenced here as planet-latest.osm.pbf):

# Extract tags
om-wikiparser get-tags planet-latest.osm.pbf > osm_tags.tsv
# Begin extraction.
for dump in $DUMP_DOWNLOAD_DIR/*-ENTERPRISE-HTML.json.tar.gz
do
  tar xzOf "$dump" | om-wikiparser get-articles \
    --osm-tags osm_tags.tsv \
    --write-new-qids new_qids.txt \
    descriptions/
done
# Extract discovered QIDs.
for dump in $DUMP_DOWNLOAD_DIR/*-ENTERPRISE-HTML.json.tar.gz
do
  tar xzOf "$dump" | om-wikiparser get-articles \
    --wikidata-qids new_qids.txt \
    descriptions/
done