
Update on the OGD Cadastre of Trees of Vienna in OpenStreetMap

Nearly two years ago, the Open Government Data (OGD) tree cadastre of the city of Vienna was imported into OpenStreetMap. OGD trees that already existed in OSM were not imported, so as not to damage already mapped data. What is still missing is a check to determine which trees are already mapped and did not originate from the original import.

This post is a short update on my way to a script that automatically aligns the not yet imported OGD trees with already mapped OSM trees. So far, I have written a Python script that checks for existing entries by location. Thanks to spatial indexing via the very easy to use rtree library, this check completes within about one minute, but I’m not yet convinced by its results. The image below shows an excerpt of the results of this analysis.
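A minimal sketch of the matching idea (not the actual script) could look like this, assuming both datasets have been reduced to (id, x, y) tuples in the same projected coordinate system; the 2 m threshold is an arbitrary value for illustration:

```python
# Sketch of the location-based matching with rtree; assumes both inputs
# are lists of (id, x, y) tuples in the same projected CRS. The 2 m
# threshold is an arbitrary choice for illustration.
from rtree import index

def match_trees(osm_trees, ogd_trees, max_dist=2.0):
    idx = index.Index()
    positions = {}
    for fid, x, y in osm_trees:
        idx.insert(fid, (x, y, x, y))  # a point is a zero-area bounding box
        positions[fid] = (x, y)

    matched, unmatched = [], []
    for fid, x, y in ogd_trees:
        nearest = next(iter(idx.nearest((x, y, x, y), 1)), None)
        if nearest is not None:
            nx, ny = positions[nearest]
            if ((nx - x) ** 2 + (ny - y) ** 2) ** 0.5 <= max_dist:
                matched.append((fid, nearest))
                continue
        unmatched.append(fid)
    return matched, unmatched
```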

Preliminary analysis of matching trees

One thing that can be seen is that the city of Vienna has added some more small parks to the dataset. As stated above, don’t take this image too seriously; it shows just preliminary results.

I’m currently thinking about re-implementing the script directly within the great QGIS Processing toolbox. This would make geographic debugging a lot easier, since one can use the QGIS map window to output results while processing data.
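Such a reimplementation could use the QgsSpatialIndex class that ships with QGIS in place of rtree. A rough sketch of how the core check might look inside a Processing script, assuming two point layers are already loaded and again using a made-up 2 m threshold:

```python
# Rough sketch of the nearest-neighbour check with the QGIS API instead of
# rtree, as it could appear inside a Processing script. Layer handling and
# the 2 m threshold are placeholders.
from qgis.core import QgsSpatialIndex, QgsPointXY

def find_matches(osm_layer, ogd_layer, max_dist=2.0):
    spatial_index = QgsSpatialIndex(osm_layer.getFeatures())
    matches = {}
    for feature in ogd_layer.getFeatures():
        point = QgsPointXY(feature.geometry().asPoint())
        for fid in spatial_index.nearestNeighbor(point, 1):
            neighbour = osm_layer.getFeature(fid)
            if neighbour.geometry().distance(feature.geometry()) <= max_dist:
                matches[feature.id()] = fid
    return matches
```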

The Complete World of OSM on limited Resources

OpenStreetMap Data

For certain projects it might be necessary to work with the complete dataset of the OpenStreetMap project (OSM). My preferred way of using data from OSM usually consists of the following steps:

  • downloading the OSM data
  • selecting my area of interest by clipping the data to my preferred extent
  • importing the data into a PostGIS database

The complete database dump of OSM is called the “Planet File” and currently weighs about 36 GB in its compressed form.


A dump of the complete OSM database can be downloaded at http://planet.osm.org

To clip the data I usually resort to a very handy command-line tool called Osmium, which lets you define a region of interest and/or specify certain tags to filter the data by. The import into a PostGIS database can be performed by multiple tools, of which I prefer Imposm3 because of its speed. But still, when using it on the complete planet file with limited resources (I used a dual-core CPU and 8 GB of RAM), it gets terribly slow and does not complete without errors. My guess is that the index created during the import procedure to access the data becomes so large that access to it is no longer fast enough. For the extract of the continent of Europe alone, the index is more than 20 GB.
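For reference, the clip-and-import chain could be driven like this; the bounding box, file names, mapping file and connection string are placeholders:

```python
# Example of the clip + import chain, driven from Python. Bounding box,
# file names, mapping file and connection string are placeholders.
import subprocess

# Clip the planet file to a region of interest with osmium.
subprocess.run([
    "osmium", "extract",
    "--bbox", "9.5,46.3,17.2,49.1",  # left,bottom,right,top (roughly Austria)
    "planet-latest.osm.pbf",
    "-o", "austria.osm.pbf",
], check=True)

# Import the extract into PostGIS with Imposm3.
subprocess.run([
    "imposm3", "import",
    "-mapping", "mapping.json",
    "-read", "austria.osm.pbf",
    "-write",
    "-connection", "postgis://osm:osm@localhost/osm",
], check=True)
```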

Split Data

The logical thing to do is to split the data prior to the import and use a separate database for each part of the dataset. When using the data later on, one has to apply any further processing steps to each database individually, but still – when not using too many splits, this should not be a lot of hassle.

What would be more reasonable than to split by continents? Luckily, the company Geofabrik already offers extracts of the planet file split by continents and, if preferred, by countries.

Download page for extracts by continent of the complete OSM database (http://download.geofabrik.de)

Instead of splitting the complete dataset on my own, I wrote a small script which downloads each continent separately and then imports each file into its own PostGIS database. This procedure was quite fast for all continents except Europe.
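The script is not reproduced here, but the idea could look roughly like this (the continent slugs follow Geofabrik’s naming; database names and credentials are placeholders):

```python
# Sketch of the per-continent download + import loop. The continent slugs
# follow Geofabrik's naming; database names and credentials are placeholders.
import subprocess
import urllib.request

CONTINENTS = [
    "africa", "antarctica", "asia", "australia-oceania",
    "central-america", "europe", "north-america", "south-america",
]

for continent in CONTINENTS:
    pbf = f"{continent}-latest.osm.pbf"
    urllib.request.urlretrieve(f"https://download.geofabrik.de/{pbf}", pbf)

    # Import each extract into its own PostGIS database.
    subprocess.run([
        "imposm3", "import",
        "-mapping", "mapping.json",
        "-read", pbf,
        "-write",
        "-connection", f"postgis://osm:osm@localhost/osm_{continent}",
    ], check=True)
```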

Still too Large

If you take a closer look at the metadata of each continent extract, you can see that Europe is by far the largest one. While the import of the other continents was done in a matter of hours, the import of Europe took over a day before my computer automatically logged off and on again. Sadly, I have no idea what happened in detail, since I was not present when this incident occurred.

But then I got another idea. I had observed that the import procedure for Europe was running at a remarkably slower rate than the other ones. What if there was some kind of internal timeout? The amount of system resources used during the import did not change during the first two hours. Assuming it did not change during the rest of the import procedure either, resource exhaustion cannot have been the reason for the stop.

The complete import procedure was done on a traditional hard drive, and the PostGIS database is located on the same HDD. What I did next was to set the -cachedir parameter of Imposm3 to use my internal SSD drive. This should speed up the index reads, which happen a lot during the import.
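In concrete terms, that means adding something like this to the import call (the SSD path is of course machine-specific):

```python
# Same import as before, but with the temporary cache directory on the SSD.
import subprocess

subprocess.run([
    "imposm3", "import",
    "-mapping", "mapping.json",
    "-read", "europe-latest.osm.pbf",
    "-write",
    "-connection", "postgis://osm:osm@localhost/osm_europe",
    "-cachedir", "/ssd/imposm3_cache",  # put the cache/index on the SSD
], check=True)
```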

And so it happened! The import itself was about 10 to 15 times faster when using the SSD as the storage medium for the temporary index. This is still not as fast as with the other continents, but way faster than before. The source file as well as the PostGIS database remained on the HDD. This was great news, since the index is generated only once but accessed millions of times, and as far as I know, only write access wears SSDs down.

Points from all 8 databases visualized in QGIS

Border Regions

A problematic situation might occur when paying special attention to the areas on the border between two continents. Since the sliced datasets overlap to a certain extent, there is redundant information. In the case of the “places” layer shown in the image above, I merged all input datasets with the “Merge shapes layers” SAGA command in the QGIS Processing toolbox. After that, I could apply a cleaning algorithm to the dataset. This might not be practical (or even feasible) with very big layers, so another solution has to be found.
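An alternative to the merge-and-clean route could be to deduplicate by OSM id directly, since a feature in an overlap zone carries the same id in both continent databases. A sketch with psycopg2; table, column and database names are placeholders for whatever the Imposm3 mapping actually creates:

```python
# Sketch: collect "places" points from all continent databases and drop the
# redundant border-zone copies by OSM id. Table, column and database names
# are placeholders for whatever the Imposm3 mapping actually creates.
import psycopg2

CONTINENTS = [
    "africa", "antarctica", "asia", "australia-oceania",
    "central-america", "europe", "north-america", "south-america",
]

seen, places = set(), []
for continent in CONTINENTS:
    conn = psycopg2.connect(dbname=f"osm_{continent}", user="osm",
                            password="osm", host="localhost")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT osm_id, name, ST_AsText(geometry) FROM osm_places")
        for osm_id, name, wkt in cur:
            if osm_id not in seen:  # keep only the first copy of each feature
                seen.add(osm_id)
                places.append((osm_id, name, wkt))
    conn.close()
```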

To avoid this problem, one could split the planet file oneself and take care to cut sharply along the continent borders so that no overlaps occur.

Recommendation

I can recommend the “continent split planet” way of importing OpenStreetMap data when you do not have access to a big server with loads of RAM and CPU cores. The hassle of e.g. reapplying the same cartographic design to each of the continents individually is within limits, since there are only 7 (in the case of the splits, 8) of them.

Book: Schwarzplan

(for more information, look at the book’s website or buy it directly at epubli.de or amazon.de)

Do you know what a “Schwarzplan” is? In English it is called a “figure-ground diagram” and is a map showing all built-up areas in black and everything else, including the background, in white. Here is an example of such a map for the area of Paris.

Figure-ground diagram of Paris

From such a map you can gain insights into many different aspects of urban structure. It is possible, for example, to recognize hints of the age of parts of a city, the way they were governed, their development history, their likely density and much more. Furthermore, figure-ground diagrams have an aesthetically appealing effect and strongly underline the differences between cities when comparing them.
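(As an aside: a simple figure-ground map can nowadays be rendered from OSM building footprints in a few lines. The sketch below uses the osmnx library; it is just an illustration, not how the book was produced.)

```python
# Illustration only: render a simple figure-ground map from OSM building
# footprints with osmnx and matplotlib. Not how the book was produced.
import matplotlib.pyplot as plt
import osmnx as ox

buildings = ox.features_from_place("Paris, France", tags={"building": True})

fig, ax = plt.subplots(figsize=(10, 10))
buildings.plot(ax=ax, color="black", linewidth=0)  # built-up areas in black
ax.set_axis_off()                                  # everything else stays white
fig.savefig("paris_schwarzplan.png", dpi=300, bbox_inches="tight")
```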

Until now, there was no atlas focusing on this kind of map, which was a pity. This might be due to the fact that one would have to acquire costly building data for each city individually. But thanks to the OpenStreetMap project, I can present to you the atlas of figure-ground diagrams, a book called:

Cover of Schwarzplan

“Schwarzplan” is an atlas displaying figure-ground diagrams of 70 different cities all over the world, together with some basic statistical information about their population and extent, on 152 pages of high-quality glossy paper.

You can take a more thorough look at the book’s website or buy it directly at epubli.de or amazon.de.

Android: OÖ Quiz

As of a week ago, you can download the Upper Austria Quiz from the Google Play store: OÖ Quiz on Google Play

Mascot of the OOE Quiz

Intro

I always wanted to learn how to program mobile devices. During the Christmas holidays, the government of Upper Austria called for mobile apps using their Open Government Data. So the chance was taken, and Kathi and I started to design an Android app. Actually, it is a hybrid app. This means that you develop your complete app as an HTML5 project and bundle it with a thin native wrapper. The result is a valid Android APK which behaves like a native app. The advantage is that the toolkit we used to generate the package (Apache Cordova) is capable of producing runnable binaries for a whole bunch of operating systems, including Windows Phone 7, BlackBerry, Symbian, WebOS, Android and, if you can afford it, iOS. Since we can’t pay $100 per year just to be allowed to push the quiz to the App Store, there is no version for the iPhone.

Toolkit

Since we both own Android devices, the app is optimized for Android. Still, there were some difficulties. First, we had to decide on a framework. jQuery Mobile seemed a reasonable choice at the time, since it is well known and I was already used to plain jQuery. Next time, I will go with a toolkit that supports some kind of MVC architecture and avoids the use of HTML whenever possible, e.g. Kendo UI. But we chose jQuery Mobile and stuck to it.

Challenges

The size of the toolbar at the bottom was a peculiar problem: while it was easy to adjust the font size to the different screen resolutions of different devices, the toolbar would not change its display height.

Toolbar of the OOE Quiz

We decided that, despite its fixed height, the toolbar would be usable in any case, even when very small, and optimized it for the lowest screen resolution we could find. This means the app runs even on very small devices.

Map Display

The map background is variable in size and automatically adjusts to the display size.

Map view of the OOE Quiz

This is possible because we used the Leaflet library, a GIS-capable mapping library for JavaScript. The data displayed in this case are GeoJSON objects, so it is easy to modify the map and add new information to it at any time. Even the questions and their correct answers are actually GeoJSON objects!
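To make that concrete: a single question could be stored as a GeoJSON feature roughly like the one below (the property names are hypothetical, not the app’s actual schema).

```json
{
  "type": "Feature",
  "geometry": { "type": "Point", "coordinates": [14.2858, 48.3069] },
  "properties": {
    "question": "Where is Linz?",
    "answer": "Linz",
    "tolerance_km": 10
  }
}
```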

Questions

The questions are generated from OGD and OSM data. Most of the work was not generating the questions, which was mostly a matter of combining a fixed phrase with a word describing a location, but manually filtering these questions for plausibility. It just does not make sense for the quiz to ask about a very small mountain that only a few people in Austria know by name. The same goes for lakes – in this case we filtered by their area.
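The lake filtering, for example, boils down to something as simple as the following sketch; the property names and the area threshold are made up for illustration.

```python
# Sketch of the plausibility filter for lake questions; property names and
# the area threshold are invented for illustration.
TEMPLATE = "Where is lake {name} located?"
MIN_AREA_KM2 = 0.5

def build_lake_questions(lakes):
    """lakes: iterable of GeoJSON-like features with name/area properties."""
    questions = []
    for lake in lakes:
        props = lake["properties"]
        if props.get("area_km2", 0) >= MIN_AREA_KM2:  # drop obscure ponds
            questions.append({
                "question": TEMPLATE.format(name=props["name"]),
                "feature": lake,
            })
    return questions
```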

Further Thoughts

It was a fun project, and I find myself playing the quiz, which is a good sign. Kathi and I already have ideas to extend the OÖ Quiz and even have plans for a quite advanced second version with multiplayer support – whether we implement it depends on our free time.