OSM-import of the OpenGovernment TreeCadastre of Vienna

Yesterday the import of the Open Government tree cadastre of Vienna was performed. The data used for the import is from 27 November 2012.

The OGD Cadastre of Trees prior to the Import


The final dataset which was used for the import can be found here (http://gisforge.no-ip.org:5984/datastore/osm/ogd_trees_wien_selected.osm.bz2) while the trees that have been omitted can be downloaded here (http://gisforge.no-ip.org:5984/datastore/osm/ogd_trees_wien_notselected.osm.bz2).

Changes to the Original Method

The method used is the one described in the previous posts, with one exception: the substitution of certain entries in the original dataset is now done by the Python script instead of manually by the user.

As proposed by Friedrich Volkmann, every entry that matches the following condition is marked with the tag “fixme=Baum oder Strauch“.

if height <= 2 and (int(datetime.datetime.now().year) - year) >= 3

This means that every entry in the cadastre of trees which is still smaller than 2 meters although it is older than 3 years is suspiciously small and should be checked manually.
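A minimal Python sketch of this rule, following the condition above (the function name is mine, not taken from the original script):

```python
import datetime

def flag_suspicious(height, year):
    """Return the extra tags for one cadastre entry: a plant that is
    still 2 m or smaller although it was planted 3 or more years ago
    may be a shrub, so it gets a fixme tag for manual checking."""
    age = datetime.datetime.now().year - int(year)
    if height <= 2 and age >= 3:
        return {"fixme": "Baum oder Strauch"}
    return {}
```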

Omitted Trees

As can be seen in the following image (area around “Urban-Loritz-Platz”), already mapped trees were successfully removed from the dataset: no trees are mapped twice, and no existing trees were deleted.


Already mapped trees in OSM are not imported (green dots are already mapped trees)

The dataset with omitted trees can be downloaded by the link provided at the top of this post.

After the Import

The most impressive result of the import are the parks. While it can get a bit confusing, the many green dots give a good idea of the tree cover.

Trees in the Stadtpark


Another striking effect of the imported trees is their position along streets:


Trees along Streets in the “Cottageviertel”

The display of trees should give the reader of the map a more thorough impression of the area they are looking at. But it might also just be too much information and make the map less readable. This is open to debate.

Interestingly, the linear structure of the imported trees can be used to detect poorly mapped streets. This is the case when, for example, a row of trees crosses the actual street, which should only happen in rare cases.

Poorly mapped streets exposed by the OGD tree cadastre


Next Steps

What has yet to be done is the conflation of the attributes of already mapped trees with the OGD cadastre of trees. One problem that arises here is that the provided reference number for trees is not unique. Maybe it was never intended to be unique, or the provided data does not contain all the necessary information. Either way, an additional search by location is needed to identify a tree unambiguously.

Also, the OGD cadastre of trees changes constantly because trees are removed or new ones are planted. So it makes sense to think about a method to automatically keep the OSM trees in sync with the OGD cadastre of trees.

OGD Vienna Preview

If you ever want to simply preview any of the data released by the city of Vienna through its Open Government Data initiative on top of an OpenStreetMap layer, you now can.

I present the “OGD Wien Preview and Matcher”. It can be accessed at http://www.gisforge.com/ogdmatcher .

Title Screen of the OGD Wien Preview and Matcher, version 1


With this web application it is possible to display any spatial dataset available on the servers of OGD Vienna. The list of datasets is generated automatically each time the application is accessed, so it always reflects the current state. Also, a dataset is only loaded from the OGD source when it is actually selected. The downside of this method is a slight delay while waiting for the chosen dataset to arrive; on the other hand, it is not necessary to store the data anywhere, and the displayed information is always the most recent.

At the moment features are displayed filled with plain yellow and a black border. When too many points are displayed at once, they are clustered. Line and polygon features are always displayed as they are. When clicking on a feature, all information associated with it is shown. It has to be mentioned that, because of the experimental nature of the methods used to build this application, umlauts are not displayed correctly.

It is also possible to compare point datasets from OGD Vienna with a copy of the OpenStreetMap data. Due to the poor performance of the server, the comparison runs very slowly and is not of much use yet.

The web application has a problem with Microsoft Internet Explorer that I have yet to track down: while data loads in this browser, rendering it on the map takes forever.

Basemap.at WMTS Layer in QGis

QuantumGIS currently has no dedicated menu entry to add a WMTS layer. But with the growing availability of web maps offered as tiled services, this becomes increasingly important.

The Austrian states work together to provide a commonly designed map of the whole country, offered under the CC-BY 3.0 AT licence. This initiative is called basemap.at, and the map is accessible via the WMTS protocol.

Despite the absence of an easily accessible way to add such a layer to QuantumGIS, it is still possible by routing the tile handling directly to the GDAL driver that QGis uses to manage raster layers. Since version 1.7, GDAL itself supports such tiled layers. Michael Douchin describes in his blog post “OpenStreetMap Tiles in QGIS” how to create an XML file containing the parameters to connect to the OpenStreetMap tile server. The specification of the GDAL driver can be viewed on its web page. This XML file can be opened in QGis as a raster source and, provided the projection is set correctly, works without any further modification.

The contents of the XML file needed to specify the connection to the basemap.at WMTS server are:

 <Service name="TMS">
 <Cache />

The values within “DataWindow” need further optimisation. When I find the time I will take a closer look at the driver specification and post a corrected version of this file, but this one already works, even though Vorarlberg is clipped.
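For reference, a complete file of this kind might look like the sketch below. The server URL and the DataWindow values are my assumptions (the geolandbasemap endpoint and the full Web Mercator extent), not values confirmed by this post:

```xml
<GDAL_WMS>
  <Service name="TMS">
    <!-- assumed basemap.at tile endpoint (geolandbasemap, Web Mercator) -->
    <ServerUrl>https://maps.wien.gv.at/basemap/geolandbasemap/normal/google3857/${z}/${y}/${x}.png</ServerUrl>
  </Service>
  <DataWindow>
    <!-- full Web Mercator extent; the clipping mentioned above would be fixed here -->
    <UpperLeftX>-20037508.34</UpperLeftX>
    <UpperLeftY>20037508.34</UpperLeftY>
    <LowerRightX>20037508.34</LowerRightX>
    <LowerRightY>-20037508.34</LowerRightY>
    <TileLevel>18</TileLevel>
    <TileCountX>1</TileCountX>
    <TileCountY>1</TileCountY>
    <YOrigin>top</YOrigin>
  </DataWindow>
  <Projection>EPSG:3857</Projection>
  <BlockSizeX>256</BlockSizeX>
  <BlockSizeY>256</BlockSizeY>
  <BandsCount>3</BandsCount>
  <Cache />
</GDAL_WMS>
```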

OSM in CouchDB on Raspberry Pi

The Raspberry Pi is a cheap credit-card-sized ARM computer with 700 MHz and 256 MB of RAM that consumes only about 3.5 W. It costs about 30 € and runs an optimised Debian Linux named “Raspbian“. Since I first heard about it I have wanted to see how it performs with a CouchDB installation. CouchDB is a document-based database that should perform well under low-RAM conditions, which makes it a good fit for the Raspberry Pi. There is a spatial extension named Geocouch which adds a spatial index.


What could be the benefits of this kind of system? Clearly the benefits only apply to a system architecture that seldom updates the data but queries it regularly. Moving an OSM database to autonomous hardware makes the system independent of the main computer. Also, depending on further research on possible queries, such a system could prove to be a versatile information gateway for OSM data and take some load off the heavily used default APIs offered by the OSM project. With a setup like this, everyone could run an information-delivering API that is cost-effective to set up and to maintain.

Set Up

So, it was time to test this configuration against an OpenStreetMap dataset! The setup of CouchDB was not 100% straightforward: after installing the package I had to modify the start-up script because there was a problem with the ownership of an automatically created directory. For the installation of Geocouch I had to compile it myself and again modify the start-up script, since the method proposed in the readme of the Geocouch project did not work for me. (I will not go into more detail about setting up this environment, but maybe I will write a post about it later on.)

CouchDB running on Raspberry Pi


On the hardware side, in addition to the 8 GB SD card that held the system, I attached a USB HDD with 400 GB of space. I had to adapt the CouchDB configuration to relocate the storage of the databases as well as the view indexes to a directory on the USB device.
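The relevant options live in CouchDB’s local.ini; a sketch with assumed mount paths (the directories are examples, not taken from my actual setup):

```ini
; local.ini - relocate database and view index storage to the USB disk
[couchdb]
database_dir = /mnt/usbhdd/couchdb/data
view_index_dir = /mnt/usbhdd/couchdb/views
```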

Used Data

I had two datasets at hand: first, all points in the area of Vienna, which sum up to 445220 entries, and second, the complete OSM dataset for Austria.

Preparing Data

To convert OSM data to JSON and batch-upload it to CouchDB I used the method and tools described on the OSMCouch page of the OpenStreetMap wiki. It uses the great Osmium framework in combination with a custom description file to generate GeoJSON-compatible output. Preparing the dataset for the extent of Vienna was not that time-consuming, whereas the one for Austria took quite some time to process and resulted in a 6.1 GB JSON file. The pre-processing of these datasets was not done on the Raspberry Pi but on a much more powerful computer.

Upload to CouchDB

Uploading was done with a little script, also mentioned on the OSMCouch page, named “chunkybulks.py”. What this script does is take a huge JSON file and upload it to a specified CouchDB database in chunks of 10.000 entries (the size can be specified manually). If one tried to upload a big file at once, the server most probably could not cope with it. On the same page there is a note saying that the software ImpOSM will soon come with integrated CouchDB support, but since that was not yet available I stuck to the old but still reliable method.
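The chunking idea can be sketched in a few lines of Python. This is not the actual “chunkybulks.py”, just an illustration of splitting a one-JSON-document-per-line file into payloads for CouchDB’s _bulk_docs endpoint:

```python
import itertools
import json

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        chunk = list(itertools.islice(it, size))
        if not chunk:
            return
        yield chunk

def bulk_payloads(json_lines, chunk_size=10000):
    """Turn a stream of one-JSON-document-per-line strings into
    CouchDB _bulk_docs payloads of the form {"docs": [...]}."""
    for chunk in chunked(json_lines, chunk_size):
        yield {"docs": [json.loads(line) for line in chunk]}

# Each payload would then be POSTed to
#   http://<host>:5984/<db>/_bulk_docs
# with Content-Type: application/json.
```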


Bulk uploading with the “chunkybulks.py” script from user “emka” (the display “of 0” appears because the total number of entries could not be determined quickly, as the file is processed sequentially)

This method worked fine when uploading the Viennese points, but with the complete dataset of Austria it crashed after 450.000 inserted entities. I guess this was because the CouchDB server could not respond fast enough with a bulk size of 10.000. After changing it to 1.000 the upload continued, but it was terribly slow.

Index Generation, Queries

Now came the critical part. Simply put (maybe too simply, but good enough to get the general idea): in CouchDB, queries are pre-defined by JavaScript functions. Such a pre-defined query is called a “view”. For faster access, CouchDB generates an index per view. The generation of the index is very time-consuming, but once the index is available any future query will be very fast. I am especially interested in the performance of queries that rely on an already built index.

So, I had to create a view – preferably a spatial view (for a more detailed explanation of this topic, please take a look at the Geocouch documentation) – and execute it once to trigger the generation of an index. As expected, the generation of the index for all 445220 entries in the Vienna dataset took hours. The index generation for the dataset of Austria took days.
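Such a spatial view lives in a design document whose map function (JavaScript, stored as a string) emits each document’s GeoJSON geometry into the spatial index. A minimal sketch, shown here as the JSON one would PUT into the database; the design document and view names are my choices, not taken from this post:

```python
# Design document for a Geocouch spatial view: the map function emits
# the geometry of every document that has one.
design_doc = {
    "_id": "_design/main",
    "spatial": {
        "points": (
            "function(doc) {"
            "  if (doc.geometry) { emit(doc.geometry, doc._id); }"
            "}"
        ),
    },
}

# Executing the view once, e.g.
#   GET /osm/_design/main/_spatial/points?bbox=16.18,48.11,16.58,48.33
# triggers the (long-running) index build; later bounding-box queries
# are answered from the finished index.
```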

When querying the dataset a slight delay is noticeable, but it is quite fast considering the power of the Raspberry Pi and the size of the database.

Going Further

One interesting question is: what will happen if I use the complete planet file? Given enough hard-disk space and time, the upload and the index generation should be possible – but how fast will queries be?

Another interesting thing is how the import method of the future version of ImpOSM will perform, especially since it has support for diffs!

Preparing the OpenGovernment TreeCadastre of Vienna for OSM-import (2)

As Friedrich Volkmann from the Austrian OSM mailing list pointed out, the entries of the Open Government dataset “Baumkataster” do not only include trees, but also shrubs. So this dataset would be better named “wood cadastre” than “tree cadastre”. The problem is that the definition of the OSM tag “natural=tree” only covers trees, so an additional filtering mechanism has to be applied. Friedrich proposed to base the decision on the height of the plant in relation to its age.

I implemented his proposal by checking whether the plant is smaller than 2 meters while being older than 3 years:

height <= 2 and (int(datetime.datetime.now().year) - year) >= 3

In addition to this, it would be best to define individual rules for each of the over 90 types of trees. But this is a huge amount of work and, in my opinion, it is questionable whether it leads to better results. Some experiments show that it all comes down to only a handful of trees that would be excluded; with the current general implementation, 678 trees are affected.

It is important to mention that the trees flagged by this method are not ignored but still imported. The difference is that they are assigned an additional tag: “fixme=Baum oder Strauch” (tree or shrub). Only a manual check can determine the actual growth habit of the plant.

Andreas Trawoeger is currently working on a yet-to-be-released live preview map which overlays the trees of the OGD dataset with the ones already in OSM. This will give a better overview of the extent of the dataset in question.

Preparing the OpenGovernment TreeCadastre of Vienna for OSM-import (1)

The city of Vienna has opened access to some of its geodata to the public. The license under which it is published is compatible with OpenStreetMap, so there should be no legal reason not to include any of it in the OSM database. One of these datasets is the cadastre of trees. (For the geometrical analysis with maps, scroll down!)

Choosing the Format

The cadastre of trees can be downloaded in various formats, among them GML, JSON, Shapefile, KML, GeoRSS and CSV. At first I went with the Shapefile format, since it is well proven and there are ways to access it from many programming languages. But for reasons explained later, CSV turned out to be the format of choice.

Attribute Data

I used QuantumGIS to inspect the downloaded data.

Looking at the Data-Structures

Looking at the data structure of the file, one can see the following columns:

  • tree-number (“BAUMNUMMER”): a unique number by which the tree can be identified unmistakably
  • area (“GEBIET”): the kind of surrounding of the tree
  • street (“STRASSE”): name of the street where the tree is located
  • type (“ART”): a string consisting of the latin name, the cultivar and the german name
  • year of plantation (“PFLANZJAHR”)
  • circumference of the stem (“STAMMUMFANG”): given in centimeters, although the OSM tag expects meters
  • diameter of the crown (“KRONENDURCHMESSER”)
  • height (“BAUMHÖHE”): the height of the tree in meters
  • geometry: the actual position of the tree in geographical lat-long
A quick glance at the page for the tag “natural=tree” in the OSM wiki gives an overview of the proposed tags for trees:
  • type: This distinguishes just between “broad_leaved”, “conifer” or “palm” trees. This information has to be calculated out of the “ART” field from the OGD-dataset.
  • genus: The genus is just the first part of the latin name and has to be extracted from the “ART” field
  • species: Here the complete latin name is stated
  • taxon: The taxon is for describing the taxonometry of the tree in greater detail. More information about this can be found on this OSM-wiki page.
  • sex: The sex of the tree
  • circumference: The circumference of the stem in meters.
  • height: The height in meters.
  • name: The name tag should only be used when it describes a very special tree.

Converting the Data-Structures

The tree number may be left out. It would make it possible to identify a tree later on when applying updates to the imported dataset, but since there is no recommended tag for data like this, it would add inconsistency to the OSM database. Also, any later updates can identify a tree by its location. There is no information about the sex or the name of the trees, so these tags are left out. The circumference in the OSM database is measured in meters and refers to the stem, so this value is taken from the “STAMMUMFANG” field, which is apparently in centimeters and needs to be converted. Height is the same in both datasets. The diameter of the crown has no appropriate tag in the OSM naming scheme, which is unfortunate since I see many cartographic possibilities for this value. I decided to include it anyway, using a tag called “diameter_crown“, as proposed on the tree 3D visualisation page in the OSM wiki.
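The field mapping just described can be sketched as a small Python function. This is an illustration, not the original script; the function name is mine, and the column names follow the list above:

```python
def ogd_to_osm_tags(rec):
    """Map one Baumkataster record (a dict of the CSV columns described
    above) to OSM tags."""
    tags = {"natural": "tree"}
    if rec.get("STAMMUMFANG"):
        # STAMMUMFANG is in centimeters, the OSM circumference tag in meters
        tags["circumference"] = str(float(rec["STAMMUMFANG"]) / 100.0)
    if rec.get("BAUMHÖHE"):
        # height is given in meters in both datasets
        tags["height"] = rec["BAUMHÖHE"]
    if rec.get("KRONENDURCHMESSER"):
        # proposed tag from the tree 3D visualisation page in the OSM wiki
        tags["diameter_crown"] = rec["KRONENDURCHMESSER"]
    return tags
```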

Extraction of Genus and Species

The genus and species have to be extracted from the “ART” field. This is done with a Python script. The “ART” field is a string which contains the complete latin name, sometimes followed by the cultivar in single quotes, followed by the german name in parentheses. An example: in the string

Tilia cordata 'Greenspire' (Stadtlinde)

“Tilia” corresponds to the genus, the species is “Tilia cordata”, “Greenspire” is the cultivar and “Stadtlinde” is the german name.

The Cultivar / Taxon

It is a bit more challenging with the taxon. According to the OSM wiki page for taxon, it may contain any latin specification of the botanical name, even the cultivar. Also, the botanical name can be split into its parts by using sub-tags like taxon:cultivar=* . It is a bit unclear to me whether to use the genus/species tags or go with only “taxon:species” and “taxon:genus”. I consider it best practice to stick with plain “genus” and “species” and include the cultivar with “taxon:cultivar”. The taxon itself is also extracted with the help of the Python script. There are some entries that contain two cultivars separated by a comma, which disrupts the dissection of the “ART” field. Also, it does not make sense to include two cultivars in the OSM database. Therefore, the problematic values are identified manually and replaced in the input CSV before processing it with the Python script. These values and their chosen replacements are:

  • “Sumach, Essigbaum” -> Essigbaum
  • “Kiefer, Föhre” -> Kiefer
  • “Schwarzkiefer, Schwarzföhre” -> Schwarzkiefer

There are two more entries that need to be changed:

  • “Malus spec. ,Apfel” -> Malus spec. (Apfel)
  • “Juglans nigra, Schwarznuss” -> Juglans nigra (Schwarznuss)
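The dissection of the “ART” field can be sketched with a regular expression. The pattern and the function name are my assumptions, not the original script; it handles the “latin name, optional cultivar in single quotes, german name in parentheses” structure described above:

```python
import re

# latin name, then optionally 'Cultivar' in single quotes,
# then optionally (German name) in parentheses
ART_RE = re.compile(
    r"^(?P<latin>[^'(]+?)\s*(?:'(?P<cultivar>[^']+)')?\s*(?:\((?P<german>[^)]+)\))?\s*$"
)

def parse_art(art):
    """Split an ART string into genus, species, cultivar and german name."""
    m = ART_RE.match(art)
    if not m:
        return None
    latin = m.group("latin").strip()
    return {
        "genus": latin.split()[0],   # first word of the latin name
        "species": latin,            # complete latin name
        "cultivar": m.group("cultivar"),
        "german": m.group("german"),
    }
```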

Determining the Type

The type is not stored explicitly in the OGD dataset but can be determined by looking at the genus of the tree. For this purpose a lookup is used inside the Python script:

 # Lookup tables replacing the original chain of if-statements
 # (same mappings; "nadelbaum" is German for conifer, "baumgruppe"
 # means group of trees and gets no type).
 CONIFER = {
     "abies", "araucaria", "calocedrus", "cedrus", "chamaecyparis",
     "cryptomeria", "cupressocyparis", "cupressus", "juniperus",
     "metasequoia", "nadelbaum", "picea", "pinus", "platycladus",
     "pseudotsuga", "sequoiadendron", "taxus", "thuja", "thujopsis", "tsuga",
 }
 BROAD_LEAVED = {
     "acer", "aesculus", "ailanthus", "albizia", "alnus", "amelanchier",
     "betula", "broussonetia", "buxus", "caragana", "carpinus", "castanea",
     "catalpa", "celtis", "cercidiphyllum", "cercis", "cladrastis", "cornus",
     "corylus", "cotinus", "cotoneaster", "crataegus", "cydonia", "davidia",
     "elaeagnus", "eucommina", "exochorda", "fagus", "fontanesia", "frangula",
     "fraxinus", "gleditsia", "gymnocladus", "hibiscus", "juglans",
     "koelreuteria", "laburnum", "liquidambar", "liriodendron", "maclura",
     "magnolia", "malus", "morus", "ostrya", "parrotia", "paulownia",
     "phellodendron", "photinia", "platanus", "populus", "prunus",
     "pterocarya", "pyrus", "quercus", "rhamnus", "rhus", "robinia", "salix",
     "sambucus", "sophora", "sorbus", "tamarix", "tetradium", "tilia",
     "toona", "ulmus", "zelkova",
 }
 # Kept as in the original script, although botanically questionable:
 # larix (larch) is actually a conifer, ilex (holly) is broad-leaved.
 SPECIAL = {"ginkgo": "ginkgo", "ilex": "palm", "larix": "broad_leaved"}

 def genus_to_type(genus):
     if genus in SPECIAL:
         return SPECIAL[genus]
     if genus in CONIFER:
         return "conifer"
     if genus in BROAD_LEAVED:
         return "broad_leaved"
     return ""  # covers "", "baumgruppe" and anything unexpected

 ttype = genus_to_type(genus)

The list of genera is complete, since I used the “List Individual Values” function of QuantumGIS to get all possible values for the genus.

Converting Data to OSM-compatible Format

I tried to make the Python script work with the SHP file, but the Python module “pyshp” apparently has problems with the encoding, and the OGR module quits the process with a segmentation fault. Currently the script takes the CSV file as input and outputs a newly created SHP file. This file can be imported into JOSM using the “opendata” plugin and then be uploaded to the OSM database. But there is a problem: attribute names in a shapefile are limited in length, which truncated some names like “diameter_crown” to “diameter_cr”.

So the way to go is to have the script create a CSV file again, which proved easy to implement. Sadly, JOSM does not import the CSV file but gets stuck during the process (this is also true for ODS files – in fact, every format other than KML had some disadvantage, e.g. unsupported encoding or inclusion of lat/lon as tags). So one can use QuantumGIS to convert the CSV to KML, which JOSM reads without any problems. The CSV produced by the Python script has to be converted to UTF-8 encoding first; otherwise QuantumGIS will not display special characters like “ä”, “ü” or “ö” and will remove them from the dataset. This can be done with one of the many text editors available (e.g. on Linux: “Gedit” or “Geany”).
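For reference, writing the CSV in UTF-8 directly from Python 3 would avoid the separate re-encoding step; a sketch, where the function name and the exact column set are my examples:

```python
import csv

def write_osm_csv(rows, path):
    """Write tag dictionaries to a UTF-8 CSV file, one row per tree.
    Missing columns are left empty."""
    fieldnames = ["lat", "lon", "natural", "genus", "species", "height",
                  "circumference", "diameter_crown"]
    with open(path, "w", encoding="utf-8", newline="") as f:
        w = csv.DictWriter(f, fieldnames=fieldnames, restval="")
        w.writeheader()
        w.writerows(rows)
```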

Geometry Information

It is important not to replace any existing trees in the OSM database or create duplicate entries. Therefore, the data is analysed using QuantumGIS.


Currently there are 2.996 trees mapped in Vienna, most of them in the 7th and 8th districts.


OpenStreetMap Tree Coverage in Vienna (©OpenStreetMap und Mitwirkende, CC BY-SA)

Many of these are located inside courtyards, so they do not collide with the OGD dataset, which only contains public trees (which in turn tend to stand along streets rather than in private areas), as can be seen in the following graphic:


OSM in Court, OGD on Street (OGD Wien, ©OpenStreetMap und Mitwirkende, CC BY-SA)

The Open Government dataset contains 120.951 trees located in all areas of Vienna:


OpenGovernmentData Vienna Tree Coverage (OGD Wien, ©OpenStreetMap und Mitwirkende, CC BY-SA)

It is easy to see that the overall coverage of the OGD dataset is much better. Additionally, the OSM dataset contains no information about tree type, height and the like.

Positional Accuracy

As can be seen in the following example graphic, many of the already mapped trees are located at nearly the same spot as their OGD counterparts.


Positional Accuracy OSM vs OGD Trees (OGD Wien, ©OpenStreetMap und Mitwirkende, CC BY-SA)

This high positional accuracy makes it easy to identify and leave out already existing trees. These trees are aggregated in a separate file for later (manual?) processing. I made a positional check with buffer zones around the OSM trees, with radii from 1 to 9 meters in steps of one meter. The results are presented in the following table. The numbers are the OGD points that fall within the buffer; the “+ More” column shows how many more trees were selected compared to the buffer one meter smaller.

Buffer Size   # of Trees Contained   + More
1 meter       1034
2 meter       1124                   + 90
3 meter       1136                   + 12
4 meter       1145                   + 9
5 meter       1158                   + 13
6 meter       1203                   + 45
7 meter       1227                   + 24
8 meter       1239                   + 12
9 meter       1250                   + 11

Increase of Number of Trees when Expanding Search Radius

As can be seen, more and more trees are selected as the search radius expands. Up to a buffer size of 5 meters, the number of additionally selected trees mostly decreases. From 5 meters on it increases again, which may be because trees are counted twice where buffer zones overlap. This value defines the threshold for trees not suitable for import: all OGD trees within a search radius of 5 meters around an existing OSM tree will not be imported. This results in a total of 1.158 trees that are excluded and kept for later manual checking.
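The buffer check above can be sketched as a simple distance test. This is an illustration of the idea (brute force, in a metric projection), not the QuantumGIS workflow actually used; the function name is mine:

```python
import math

def within_radius(ogd_pts, osm_pts, radius_m):
    """Count OGD points lying within `radius_m` meters of any OSM point.
    Points are (x, y) tuples in a metric projection; the O(n*m) loop is
    fine for a few thousand OSM trees."""
    hits = 0
    for ox, oy in ogd_pts:
        if any(math.hypot(ox - sx, oy - sy) <= radius_m for sx, sy in osm_pts):
            hits += 1
    return hits
```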

Preparation of Geometry Information

To exclude the unwanted trees and save them in a separate file for later processing, QuantumGIS can again be used.

Manual Refinement

After selecting the trees that can be imported, suspicious values like a ridiculously high “diameter_crown” have to be removed.

In JOSM the data can be refined even more. This step could have been included in the Python script, but it is easy enough to do manually. There is the value “species=baumgruppe” (group of trees), which does not make sense for a single node. These “Baumgruppen” will be included in the final upload, but only as “natural=tree” without any additional information. With JOSM we can search for “baumgruppe” and remove the undesired values for all found trees at once. There are also some empty attributes. They can easily be found and removed with the “validator” plugin: select all elements, perform a validation, then select all reported problems and click “Fix”. The empty attributes are deleted automatically. To speed up this process I deactivated all but the needed checks in the options.


By now the OGD tree data should be refined and ready to upload!