High performance CouchDB uploader for Python

Today I want to talk about my very first Python module published on PyPI, the Python Package Index. It is called mpcouch and performs a single task that I was surprised to find no existing module for. But first, some theory!

Uploading to CouchDB as it is

To upload a document to a CouchDB database, it is simply pushed to the database's HTTP REST interface. If you have more than one document to store, you repeat this step for each one. This becomes a very slow process when you have to add thousands or even millions of documents.

Therefore, CouchDB supports uploading a set of documents as a whole. This is called a batch-upload and performs tremendously faster than single-document uploads. The existing couchdb module for Python offers this way of uploading data through its API.
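
For reference, a minimal sketch of such a batch-upload with the couchdb module might look like this; the database name and the documents are made up, and as far as I know the update() call sends everything in one bulk request:

import couchdb

# Connect to a local CouchDB and open an existing database (names are examples).
server = couchdb.Server("http://localhost:5984/")
db = server["myDatabase"]

# Collect the documents first ...
docs = [{"type": "measurement", "value": i} for i in range(30000)]

# ... then store them all with one request instead of 30000 single uploads.
db.update(docs)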

The tedious part

This sounds very good so far, but here comes my problem: when using CouchDB I usually deal with a very large amount of data that I have to fill the database with initially. In most cases this looks like this: take some data, perform some preparatory steps on it, get the document entry for the database ready, and finally push this single document to the database (or collect the documents to store them later as a batch). It would be very comfortable to just push a single document to the database driver and have it collect these documents on its own. When enough have been collected, they would be batch-uploaded to the database while further documents are collected again. Ideally, this batch-upload would run in parallel to the collection of new documents, so the actual processing does not have to pause.

The mpcouch module

This is exactly what the mpcouch module does. After importing the module with

import mpcouch

you can create a new mpcouchPusher object which represents your bulk-uploading interface to a CouchDB database:

myCouchPusher = mpcouch.mpcouchPusher( "http://localhost:5984/myDatabase", 30000 )

The first parameter is the URL of an existing CouchDB database. It has to be specified this way so that it is easy to switch to a non-local CouchDB. The second parameter (“30000”) is the number of documents that have to be collected before a batch-upload is performed. This value has proven reasonable on my system; it might be different on yours.

You can now save a document to the database by using the pushData method of the mpcouchPusher object:

myCouchPusher.pushData( MyPythonJSONDocument )

The document you push has to be a Python object representing a valid JSON document, just as the couchdb module requires. The mpcouch module collects all incoming documents and bulk-uploads them as soon as 30000 of them have been received. The great thing is that, to do so, the module spawns a separate process to perform the upload while it continues to collect new documents. Your program does not have to wait for the database upload to finish!

When you have generated your last document and want to finish the database upload procedure, it is important to call the method

myCouchPusher.finish()

This ensures that all still-running batch-upload processes have finished and that the last batch of accumulated documents is uploaded before the program is allowed to continue.
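
Putting it all together, a typical upload loop might look like this (generate_documents is just a placeholder for whatever produces your JSON documents):

import mpcouch

def generate_documents():
    # Placeholder for your own document preparation logic.
    for i in range(100000):
        yield {"type": "example", "number": i}

pusher = mpcouch.mpcouchPusher("http://localhost:5984/myDatabase", 30000)

for doc in generate_documents():
    pusher.pushData(doc)  # collected and bulk-uploaded in the background

pusher.finish()  # wait for all remaining uploads to complete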

Still not perfect

Of course, there are still many rough edges in this package. For example, a new upload process is started every time enough documents have been collected, no matter how many upload processes are already running. If the uploads perform too slowly, more and more batches are started in parallel, which in turn makes the overall upload even slower. Implementing a simple process cap would solve this quite easily. Until then, one has to find the right batch-size limit for any given scenario.
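
Such a process cap could look roughly like the following sketch; the function and variable names are illustrations, not the actual mpcouch internals:

import multiprocessing

MAX_UPLOADERS = 4  # assumed limit, tune for your machine

def start_upload(batch, upload_function, running):
    # Forget upload processes that have already finished.
    running[:] = [p for p in running if p.is_alive()]
    # Block until a slot is free instead of piling up parallel uploads.
    while len(running) >= MAX_UPLOADERS:
        running[0].join()
        running[:] = [p for p in running if p.is_alive()]
    p = multiprocessing.Process(target=upload_function, args=(batch,))
    p.start()
    running.append(p)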

Update on the OGD Cadastre of Trees of Vienna in OpenStreetMap

Nearly two years ago, the Open Government cadastre of trees of the city of Vienna was imported into OpenStreetMap. Wherever a tree already existed in OSM, the corresponding OGD tree was not imported, so as not to damage already mapped data. What is still missing is a check of which trees were already mapped and did not originate from that import.

This post is a short update on my way to a script that automatically aligns the not yet imported OGD trees with already mapped OSM trees. So far, I have written a Python script that checks for existing entries by location. Thanks to spatial indexing and the very easy to use rtree library, this check can be done within about one minute, but I am not yet convinced by its results. The image below shows an excerpt of the results of this analysis.
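
The core of that check is quite small. A simplified sketch of the rtree-based lookup might look like this, assuming the coordinates are already projected to a metric CRS (the tree lists are placeholders for the real OGD and OSM inputs):

from rtree import index

# osm_trees and ogd_trees are lists of (x, y) coordinates in a metric CRS.
osm_trees = [(1822.0, 342815.0), (1830.5, 342820.0)]   # placeholder data
ogd_trees = [(1821.5, 342814.0), (5000.0, 340000.0)]   # placeholder data

# Build a spatial index over the trees already mapped in OSM.
idx = index.Index()
for i, (x, y) in enumerate(osm_trees):
    idx.insert(i, (x, y, x, y))

TOLERANCE = 5.0  # metres; how close an OGD tree must be to count as "already mapped"

unmatched = []
for x, y in ogd_trees:
    hits = list(idx.intersection((x - TOLERANCE, y - TOLERANCE,
                                  x + TOLERANCE, y + TOLERANCE)))
    if not hits:
        unmatched.append((x, y))

print(len(unmatched), "OGD trees have no OSM tree nearby")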

Preliminary analysis of matching trees

One thing that can be seen is that the city of Vienna has added some more small parks to the dataset. As stated above, don't take this image too seriously; it displays just preliminary results.

I am currently thinking about re-implementing the script directly within the great QGIS Processing toolbox. This would make geographic debugging a lot easier, since the QGIS map window can be used to output results while processing data.

The Complete World of OSM on Limited Resources

OpenStreetMap Data

For certain projects it might be necessary to work with the complete dataset of the OpenStreetMap project (OSM). My preferred way of using OSM data usually consists of the following steps:

  • download the OSM data
  • select my area of interest by clipping the data to my preferred extent
  • import the data into a PostGIS database

The complete database dump of OSM is called the “Planet File” and currently weighs about 36GB in its compressed form.

OSMplanet

A dump of the complete OSM database can be downloaded at http://planet.osm.org

To clip the data I usually resort to a very handy command-line tool called Osmium, which lets you define a region of interest and/or specify certain tags to filter the data by. The import into a PostGIS database can be performed with several tools, of which I prefer one called Imposm3 because of its speed. But when using it on the complete planet file with limited resources (I used a dual-core CPU and 8GB of RAM), it gets terribly slow and does not complete without errors. My guess is that the index created during the import procedure to access the data becomes so large that access to it is no longer fast enough. For the extract of the continent of Europe alone, the index is more than 20GB.

Split Data

The logical thing to do is to split the data prior to import and use a separate database for each part of the dataset. When using the data later on, one has to apply any further processing steps to each database individually, but as long as there are not too many splits, this should not be a lot of hassle.

What would be more reasonable than to split by continents? Luckily, the company Geofabrik already offers extracts of the planet file split by continents and, if preferred, by countries.

Download page for extracts by continent of the complete OSM database (http://download.geofabrik.de)

Instead of splitting the complete dataset on my own, I wrote a small script which downloads each continent separately and then imports each file into its own PostGIS database, as sketched below. This procedure was quite fast for all continents except Europe.
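
A stripped-down sketch of what such a script might do is shown below; the Geofabrik URL pattern, the database names and the Imposm3 flags are assumptions based on my setup and may need adjusting:

import subprocess
import urllib.request

CONTINENTS = ["africa", "antarctica", "asia", "australia-oceania",
              "central-america", "europe", "north-america", "south-america"]

for continent in CONTINENTS:
    pbf = "%s-latest.osm.pbf" % continent
    url = "http://download.geofabrik.de/" + pbf

    # Download the extract for this continent.
    urllib.request.urlretrieve(url, pbf)

    # Import it into its own PostGIS database with Imposm3.
    subprocess.check_call([
        "imposm3", "import",
        "-mapping", "mapping.json",
        "-read", pbf,
        "-write",
        "-connection", "postgis://osm:osm@localhost/osm_" + continent,
    ])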

Still too Large

If you take a closer look at the metadata of each continent extract, you can see that Europe is by far the largest one. While the import of the other continents was done in a matter of hours, the one for Europe ran for over a day before my computer automatically logged off and on again. Sadly, I have no idea what happened in detail, since I was not present when this incident occurred.

But then I got another idea. I had observed that the import of Europe was progressing at a remarkably slower rate than the others. What if there was some kind of internal timeout? The amount of system resources used during the import did not change during the first two hours, and assuming it did not change during the rest of the import either, resource exhaustion cannot be the reason for the stop.

The complete import procedure was done on a traditional hard drive, and the PostGIS database is located on the same HDD. What I did next was to point the -cachedir parameter of Imposm3 to my internal SSD drive. This should speed up access to the index, which is read a lot during the import.
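
In terms of the sketch above, this amounts to a single additional argument in the Imposm3 call (the cache path and connection string are just examples):

import subprocess

subprocess.check_call([
    "imposm3", "import",
    "-mapping", "mapping.json",
    "-read", "europe-latest.osm.pbf",
    "-write",
    "-connection", "postgis://osm:osm@localhost/osm_europe",
    "-cachedir", "/ssd/imposm_cache",  # temporary import cache on the SSD
])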

And so it happened! The import itself was about 10 to 15 times faster when using the SSD as the storage medium for the temporary index. This is still not as fast as with the other continents, but way faster than before. The source file as well as the PostGIS database remained on the HDD. This was great news, since the index is generated only once but accessed millions of times, and as far as I know, only write access wears an SSD down.

Points from all 8 databases visualized in QGIS

Border Regions

A problematic situation might occur in the areas on the border between two continents. Since the sliced datasets overlap to a certain extent, there is redundant information. In the case of the “places” layer shown in the image above, I merged all input datasets with the “Merge shapes layers” SAGA command in the Processing toolbox of QGIS. After that I could apply a cleaning algorithm to the dataset. This might not be practical (or even feasible) with very big layers, so another solution has to be found for those.

To avoid this problem, one could split the planet file into regions oneself, taking care to cut sharply without overlaps.

Recommendation

I can recommend the “continent-split planet” way of importing OpenStreetMap data when you do not have access to a big server with loads of RAM and CPU cores. The hassle of, for example, reapplying the same cartographic design to each of the continents individually stays within limits, since there are only 7 (or, in the case of the splits, 8) of them.

Book: Schwarzplan

(for more information have a look at the book's website or buy it directly at epubli.de or amazon.de)

Do you know what a “Schwarzplan” is? In English it is called a “figure-ground diagram”: a map showing all built-up areas in black and everything else, including the background, in white. Here is an example of such a map for the area of Paris.

Figure-ground diagram of Paris

From such a map you can gain insights into many different aspects of urban structure. For example, it is possible to recognize hints about the age of parts of the city, how they were governed, their history of development, their likely density, and much more. Furthermore, figure-ground diagrams are aesthetically appealing and strongly underline the differences between cities when comparing them.

Up until now, there was no atlas focusing on this kind of map, which was a pity. This might be due to the fact that one would have to obtain costly building data for each city individually. But thanks to the OpenStreetMap project, I can present to you an atlas of figure-ground diagrams, a book called:

Cover of Schwarzplan

“Schwarzplan” is an atlas displaying figure-ground diagrams of 70 different cities all over the world, together with some basic statistical information about their population and extent, on 152 pages of high-quality glossy paper.

You can take a more thorough look at the book's website or buy it directly at epubli.de or amazon.de.

Android: OÖ Quiz

As of a week ago, you can download the Upper Austria Quiz from the Google Play store: OÖ Quiz on Google Play

Mascot of the OOE Quiz

Intro

I always wanted to learn how to program mobile devices. During the Christmas holidays the government of Upper Austria called for mobile apps using their Open Government Data. So the chance was taken, and Kathi and I started to design an Android app. Actually, it is a hybrid app. This means that you develop your complete app as an HTML5 project and bundle it with a small native wrapper. The result is a valid Android APK which behaves like a native app. The advantage is that the toolkit we used to generate the package (Apache Cordova) is capable of producing runnable binaries for a whole bunch of operating systems, including Windows 7, BlackBerry, Symbian, WebOS, Android and, if you can afford it, iOS. Since we can't pay $100 per year just to be allowed to push the Quiz to the App Store, there is no version for the iPhone.

Toolkit

Since we both own Android devices, the app is optimized for Android. Still, there were some difficulties. First, we had to decide on a framework. jQuery Mobile seemed a reasonable choice at the time, since it is well known and I am already used to plain jQuery. Next time I will go with a toolkit supporting some kind of MVC architecture that avoids direct use of HTML whenever possible, for example Kendo UI. But we chose jQuery Mobile and stuck with it.

Challenges

The size of the toolbar at the bottom was a peculiar problem: while it was easy to adjust the font size to the different screen resolutions of different devices, the toolbar would not change its display height.

Toolbar of the OOE Quiz

We decided that despite its fixed height the toolbar would be usable in any case, even when very small, and optimized it for the lowest screen resolution we could find. This means the app runs even on very small devices.

Map Display

The map background is variable in size and automatically adjusts to the display size.

Map view of the OOE Quiz

This is possible because we used the Leaflet JS library, a GIS-capable mapping library for JavaScript. The data displayed in this case are GeoJSON objects, so it is easy to modify the map and add new information to it at any time. Even the questions and their correct answers are actually GeoJSON objects!
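
Just to illustrate the idea (this is not the app's actual data schema), a question stored as a GeoJSON feature could look roughly like this, written here as a Python dictionary:

# A hypothetical quiz question as a GeoJSON feature: the geometry is the
# location of the correct answer, the properties carry the question text.
question = {
    "type": "Feature",
    "geometry": {
        "type": "Point",
        "coordinates": [14.2858, 48.3069],  # longitude, latitude (Linz)
    },
    "properties": {
        "question": "Where is the city of Linz?",
        "tolerance_km": 10,
    },
}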

Questions

The questions are generated from OGD and OSM data. Most of the work was not generating the questions, which was mostly a matter of combining a fixed phrase with a word describing a location, but manually filtering these questions for plausibility. It just does not make any sense to let the quiz ask about a very small mountain which only a few people in Austria know by name. The same goes for lakes; in that case we filtered by area.

Further Thoughts

It was a fun project, and I find myself playing the quiz, which is a good sign. Kathi and I already have ideas to extend the OÖ Quiz and even plans for a quite advanced second version with multiplayer support; whether we implement it depends on our free time.

Analysing the OGD Vienna fire hydrants for OSM import

The Dataset

The Open Data initiative of the city of Vienna (http://data.wien.gv.at/) has released a dataset of all fire hydrants in Vienna. As of the 17th of April it contains 12702 unique entries and no additional attributes.

OGD Vienna: Fire hydrants

One can see that the area of Vienna is completely covered. The dataset itself seems to be quite accurate.

OSM Data

In OpenStreetMap, a fire hydrant is marked with the tag “emergency=fire_hydrant”. Filtering the OSM dataset of Vienna from the 16th of April reveals the following hydrants (blue dots):

OSM: Fire hydrants in Vienna

The total number of fire hydrants in the OSM dataset was 67. They are not placed as densely as, for example, trees, so it is easier to find out which fire hydrants in the OGD dataset should not be imported.

Preparing for Import

A quick analysis revealed that with a buffer of 20 meters, most of the already mapped fire hydrants (63) can be detected and omitted.
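
A simplified version of that check, assuming both datasets are available as coordinate lists in a metric CRS (the coordinates below are placeholders), could look like this:

import math

BUFFER = 20.0  # metres around each OSM hydrant

# Placeholder coordinates in a metric CRS.
osm_hydrants = [(1000.0, 2000.0), (1500.0, 2500.0)]
ogd_hydrants = [(1003.0, 2001.0), (4000.0, 4000.0)]

def already_mapped(pt, existing, radius):
    # True if any existing hydrant lies within `radius` metres of pt.
    return any(math.hypot(pt[0] - x, pt[1] - y) <= radius for x, y in existing)

to_import = [pt for pt in ogd_hydrants
             if not already_mapped(pt, osm_hydrants, BUFFER)]

print(len(to_import), "OGD hydrants left to import")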

Comparing fire hydrants in OGD and OSM (red: OGD, blue: OSM)

OSM-import of the OpenGovernment TreeCadastre of Vienna

Yesterday the import of the Open Government tree cadastre of Vienna was performed. The data used for the import is from the 27th of November 2012.

The OGD Cadastre of Trees prior to the Import

The final dataset which was used for the import can be found here (http://gisforge.no-ip.org:5984/datastore/osm/ogd_trees_wien_selected.osm.bz2) while the trees that have been omitted can be downloaded here (http://gisforge.no-ip.org:5984/datastore/osm/ogd_trees_wien_notselected.osm.bz2).

Changes to the Original Method

The method used is the one described in previous posts, with the exception that the substitution of certain entries in the original dataset was done by the Python script and not manually by the user.

As proposed by Friedrich Volkmann, every entry that fulfills the following criterion is marked with the tag “fixme=Baum oder Strauch” (“tree or shrub”):

if height <= 2 and (datetime.datetime.now().year - year) >= 3:

This means that every entry in the cadastre of trees which is smaller than 2 meters but is older than 3 years is suspiciously small in size and should be checked manually.
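
A minimal sketch of how this check might be applied while preparing the OSM nodes; the attribute names height, year_planted and tags are assumptions, not the actual script's variables:

import datetime

def flag_suspicious(tree):
    # Add a fixme tag to entries that are unusually small for their age.
    height = tree["height"]          # height in metres (assumed attribute name)
    year = tree["year_planted"]      # planting year (assumed attribute name)
    if height <= 2 and (datetime.datetime.now().year - year) >= 3:
        tree["tags"]["fixme"] = "Baum oder Strauch"
    return tree

example = {"height": 1.5, "year_planted": 2008, "tags": {}}
print(flag_suspicious(example)["tags"])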

Omitted Trees

As can be seen in the following image (area around “Urban-Loritz-Platz”), already mapped trees were successfully removed from the import dataset: no trees are mapped multiple times and no existing trees were removed.

Already mapped trees in OSM are not imported (green dots are already mapped trees)

The dataset of omitted trees can be downloaded via the link provided at the top of this post.

After the Import

The most impressive thing about the import is the parks. While it can get a bit confusing, the many green dots give a good idea of the tree cover.

Trees in the Stadtpark

Another striking aspect of the imported trees is their position alongside streets:

Trees along Streets in the “Cottageviertel”

The display of trees should give the reader of the map a more thorough impression of the area they are looking at. But it might also just be too much information and make the map less readable. This is open to debate.

Interestingly, the linear structure of the imported trees can be used to detect poorly mapped streets. This is the case when, for example, a row of trees crosses the mapped street, which should only happen in rare cases.

Poorly mapped streets exposed by the OGD tree cadastre

Next Steps

What has yet to be done is the conflation of attributes of already mapped trees with the OGD cadastre of trees. One problem that arises here is that the provided reference number for trees is not unique. Maybe it was never intended to be unique, or the provided data does not contain all the necessary information. Either way, one has to perform an additional search by location to identify a tree unambiguously.

Also, the OGD cadastre of trees is constantly changing because trees are removed or new ones are planted. So it makes sense to think about a method to automatically keep the OSM trees up to date with the OGD cadastre of trees.
