Thursday, December 24, 2009

Getting smarter... the Xmas query tool

The transition to using coverages at the internal level in JGrass, to be more compatible with GeoTools, is slowly proving that the hard work done (and ongoing) is worth the price.

One example of this is what was known as the Raster Query Tool, which was used to query GRASS raster layers, as shown in this picture:

Well, we now decided to make it a bit smarter, so that it understands the underlying raster layer.

The Smart query tool can now be found here:

The following image shows a query done on a tiff layer:

But what makes me really happy is the way it is able to query layers that depend on time and depth, such as a netcdf layer. The result looks like this:

What happens is that the values are shown along the Y axis, while time and depth are shown along the X axis. In the case in which the dataset depends on both, time is shown along the X axis and a chart is created for every available depth. The charts can also be switched on and off for better readability.

To be even smarter, we enabled the tool to work also on feature layers, by showing the attributes and some additional information depending on the geometry type (such as number of nodes, centroid, area, etc.):

What happens if several layers are selected? It depends on the layer types; anyway, the case of mixed feature and coverage layers is not handled really well: it shows the popups in a pile. You get the info you wanted for sure, you just have to browse through it :)

So happy Christmas to everyone and a great great new JGrass year!!!!!!

Tuesday, December 8, 2009

Arcmap dbf import to BeeGIS

Recently we gave a 3-day uDig-JGrass-BeeGIS course for some very nice guys. When they saw BeeGIS, they were very interested, and one question that came to the surface, raised by a guy owning a handheld with ArcMap, was whether it would be possible to import the ArcPad file into the BeeGIS database, in order to exploit it to synchronize pictures on the map.

Well, since they really were that nice, we created a small import tool for BeeGIS.
You should know that from ArcPad you can export a point shapefile, which has a big attribute table with all the info in it.
The following is an example:

To use it just follow the steps:

1) Go to File -> import and select "Import Arcpad..."

2) In the following tab insert the path to the dbf file exported from Arcpad

After pushing finish, the dbf is imported into the database's internal GPS log, which is also used for the photo sync.

In fact, if we have a look into the database view, we can see that the imported data are there:

I guess that will make it into the next build.
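For the curious: even without GIS libraries a dbf file is easy to inspect, since it starts with a fixed 32-byte header carrying the record count and sizes. The following is just a sketch of that header layout, not the BeeGIS importer code:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Minimal sketch of the fixed 32-byte dBase (.dbf) header; the real importer
// of course also reads the field descriptors and the records themselves.
public class DbfHeader {
    public final int recordCount; // bytes 4-7, little endian
    public final int headerSize;  // bytes 8-9, little endian
    public final int recordSize;  // bytes 10-11, little endian

    public DbfHeader( byte[] headerBytes ) {
        ByteBuffer buf = ByteBuffer.wrap(headerBytes).order(ByteOrder.LITTLE_ENDIAN);
        buf.position(4);
        recordCount = buf.getInt();
        headerSize = buf.getShort() & 0xFFFF;
        recordSize = buf.getShort() & 0xFFFF;
    }
}
```

Reading the field descriptors and the actual records follows the same pattern, each field descriptor being another fixed 32-byte block.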

Saturday, December 5, 2009

EPL-GPL: Open letter to the FSFEurope

Since I have had enough of not knowing exactly how to deal with the
EPL-GPL problem, I am going to write to the FSFEurope for clarification.
The letter I will send is placed here:

It will surely change during the next days, but I plan to send it by
the middle of next week.

Constructive comments are welcome!

New BeeGIS version available

It has taken a really long while, but finally a new BeeGIS version is out. It has a lot of enhancements and fixes that would take me too long to explain. One really important thing is that we now have a manual. Also, BeeGIS now has no JGrass dependencies and as such can be installed into uDig directly.

For installation instructions have a look here.

You can download the manual here.

Friday, December 4, 2009

Moving a little step closer to OSM

For a while I have wanted to create some tools to support OpenStreetMap in BeeGIS, in order to be able to upload digitized field data to OSM.

Well, a little step has been made now: a shapefile to osm exporter. As so often happens, I didn't do anything great here; I used the magnificent powers of open source and borrowed it from Ian Dees' shp-to-osm project, which luckily is written in Java.

How to use it? It is wrapped together with the dxf, dwg and kml import in the vector import/export plugin, which can be found here with installation instructions.

The rest is easy.

1) assume I have 3 layers of Points, MultiLines and MultiPolygons

2) File -> export -> Openstreetmap export

3) Choose the shapefile, and output folder

A bunch of osm files will be created in the output folder.

4) Open JOSM and import them

Since they are just points, you won't see much, right?

But if I do the same with the line and polygon layer and then import all the osm files created, it looks prettier:

Next up will be doing the export directly from the feature layer, which could be backed by any datastore: shapefile, PostGIS or whatever.
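For the curious, an .osm file for points is little more than a small XML document of node elements. This toy sketch shows roughly what the exporter has to produce; the real work (ways, relations, tags, id management) is done by the shp-to-osm library:

```java
// Toy sketch: serialize lon/lat points to a minimal OSM XML document.
// Negative ids conventionally mark entities that are new to OSM.
public class OsmPointWriter {
    public static String toOsmXml( double[][] lonLats ) {
        StringBuilder sb = new StringBuilder();
        sb.append("<?xml version='1.0' encoding='UTF-8'?>\n");
        sb.append("<osm version='0.6' generator='sketch'>\n");
        int id = -1;
        for( double[] lonLat : lonLats ) {
            sb.append("  <node id='").append(id--).append("' lat='").append(lonLat[1])
                    .append("' lon='").append(lonLat[0]).append("' version='1'/>\n");
        }
        sb.append("</osm>\n");
        return sb.toString();
    }
}
```

A file like this can be opened directly in JOSM, just as the exporter's output is in the steps above.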

Wednesday, December 2, 2009

Is raster styling finally coming to uDig?

As JGrass gets more and more into uDig, we are starting to need decent styling for rasters that are based on physical data.
For example, if you now load a geotiff that contains novalues into uDig, you will get this:

For JGrass maps we always had our own styling way, but now things will change, since we are porting everything to the coverage engine of uDig.
It is quite obvious that the above map doesn't help that much :)

Luckily we have a customer (that for now has to be hidden) that is paying for better raster styling.

The first tests are already so nice, that I needed to share them.

Well, let's go back to our geotiff.

If I open the style editor, I am now presented with the typical JGrass raster styler. Since currently the sld file, if available, is not read, the panel of color rules will be empty. There is a reset button that helps by getting the extrema from the map and proposing a grayscale colortable:

Grayscale is already better than the first result, but I prefer colors...
Ok, there are a bunch of buttons to add rules and define their colors and range values by hand... but we are lazy, so for our lazy friends we made a combobox of predefined colortables.

For example by choosing the elevation table, I will get:

The elevation colormap proposes a set of color rules, equally interpolated between the extrema of the map. I can also decide to disable some of the rules and the engine will ignore them, producing:

Did you think it would produce a hole? No, it just interpolates between the active rules. But yes, you can produce a hole just by setting the opacity of the rule to some transparency, for example to 0.5:

or maybe really to 0.0 to achieve the hole?

And yes, the slider in the lower part gives general transparency:
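By the way, the "equally interpolated between the extrema" rule generation is simple to sketch. This is only the breakpoint math; the actual styler of course produces the color rules themselves:

```java
// Sketch: spread N color rules evenly between the raster extrema, as the
// predefined colortables do. Only the breakpoint values are computed here.
public class ColorRules {
    /** Returns the N+1 breakpoint values between min and max for N rules. */
    public static double[] breakpoints( double min, double max, int rules ) {
        double[] values = new double[rules + 1];
        double step = (max - min) / rules;
        for( int i = 0; i <= rules; i++ ) {
            values[i] = min + i * step;
        }
        return values;
    }
}
```

Disabling a rule then just means interpolating between the remaining breakpoints, which is exactly why no hole appears.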

Cool, what else can we do?
Do you use technical maps? No? The ones with all the nice information in black on a white background:

well, the greatest thing would be to be able to make the white part transparent, the black part white (or any visible color), and overlay that over an orthophoto.

Ok, so open the style editor and push reset to see what values are contained:

Sure, as we expected, it is a bitmap: 1 is something and 0 is something else. So let us choose one of the proposed colortables:

Yeah, that now has the black parts converted to white. To remove the black part, we can just disable the color rule or make it transparent, remember?
Well, since the map's background is white, I will now propose a red drawing and a transparent background:

Easy, right?

This has all been done on one-band maps, where this kind of colormap makes sense.
I hope to get that soon into uDig, since I have really missed this...

Tuesday, December 1, 2009

New CSV import interface

I know I should be writing a wrapup of FOSS4G in Sydney and, the week after that, of GFOSS (the Italian local chapter), but coding is definitely more fun... uff

Well, at the last mixed uDig-JGrass-BeeGIS course we gave, one recurring thing came out: people still have csv files and need to import them into the GIS a LOT!!!
For those who do not know, CSV stands for comma separated values and is a text file that looks like:


So this morning I decided to add a new interface for that to JGrass. I hope it will be accepted quickly, so that it makes its way into uDig.

How does it work now?

1) File -> Import and then choose CSV import

2) Browse for the file to import and choose the separator character/string.

The preview in the lower part helps you see what is going on and also gives you the possibility to choose a type for the various fields, as well as their names. Remember that an X and a Y field are mandatory.

3) Choose the coordinate system and push finish.

Nothing new obviously, but that was definitely missing in the uDig family.
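The core of such an import is trivial to sketch: split each row on the chosen separator and parse the user-selected X and Y columns into coordinates (error handling and the remaining fields omitted):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: parse CSV rows into x/y coordinate pairs, given the separator and
// the column indexes of the mandatory X and Y fields chosen by the user.
public class CsvPoints {
    public static List<double[]> parse( List<String> rows, String separator, int xCol, int yCol ) {
        List<double[]> points = new ArrayList<double[]>();
        for( String row : rows ) {
            String[] fields = row.split(separator);
            double x = Double.parseDouble(fields[xCol].trim());
            double y = Double.parseDouble(fields[yCol].trim());
            points.add(new double[]{x, y});
        }
        return points;
    }
}
```

The wizard then only has to wrap these coordinates into point features with the chosen coordinate system.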

Saturday, October 10, 2009

Some dxf/dwg screens

Just some screens taken from the import of dxf and dwg files.
Note that dwg files are badly supported: only formats older than Acad2000 are covered, and I think not every case...

Cadastral files are one good example of data that is delivered here only in dxf format. It finally seems to work well:

And an example of dwg where, due to the complexity of the different dwg internal types, I preferred to create three shapefiles, one for every type of data, and set the layer name as an attribute, so one can select-copy/export the features to other layers.

Much still has to be tested. If you want your files tested, please send them to me; I would be glad to fix problems if I am able.

Friday, October 9, 2009

Dxf (and Dwg) in JGrass?

In our engineering job we come in touch much too often with dwg and dxf data. And much too often I have told myself (and was told by my colleague Silli) that I should finally port the dxf/dwg import tool that the old JGrass borrowed from the gvSIG community.

And here is finally the first step. The dxf import:

1) go to the usual import menu and find the dxf/dwg import

2) choose dxf import and insert the dxf file to be imported and the new shapefile to be created. Since dxf has no idea about projection, choose also a projection for the imported data.

3) push finish and wait. One layer will be created for every internal layer of the dxf file, with the data type that best suits the JTS type. In this case I only had a road network:

Now I have to test other types and files. Then it's dwg funtime (I hate those proprietary formats!). And then it all goes into uDig. Still have to think about how, though, since the license of this plugin has to stay GPL because of its prior licensing. Hmmmm....
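The "data type that best suits the JTS type" choice boils down to a mapping from dxf entity types to geometry types. A sketch with a few common entity names (the mapping choices here are illustrative, not necessarily the ones the importer uses):

```java
// Sketch: map common dxf entity types to a shapefile geometry type, one
// decision per internal dxf layer. The choices below are illustrative only.
public class DxfTypeMap {
    public static String geometryTypeFor( String dxfEntity ) {
        if (dxfEntity.equals("POINT") || dxfEntity.equals("TEXT"))
            return "Point";
        if (dxfEntity.equals("LINE") || dxfEntity.equals("POLYLINE")
                || dxfEntity.equals("LWPOLYLINE") || dxfEntity.equals("ARC"))
            return "LineString";
        if (dxfEntity.equals("HATCH") || dxfEntity.equals("SOLID"))
            return "Polygon";
        return "Unknown";
    }
}
```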

Thursday, September 24, 2009

RAMADDA, take two: browsing the tree

Assume you have a Ramadda server with some groups created and some data uploaded, something that looks like:

Want to access it programmatically and browse it? Here is the code to do so, shown in an example that traverses the repository tree and prints out all of the entries.

The main dump method:

/**
 * Dumps the tree of the repository.
 *
 * @throws Exception
 */
public void dumpTree() throws Exception {
    List<HttpFormEntry> postEntries = new ArrayList<HttpFormEntry>();
    postEntries.add(HttpFormEntry.hidden(ARG_SESSIONID, repositoryClient.getSessionId()));
    postEntries.add(HttpFormEntry.hidden(ARG_OUTPUT, "xml.xml"));
    String[] result = repositoryClient.doPost(repositoryClient.URL_ENTRY_SHOW, postEntries);
    if (result[0] != null) {
        System.err.println("Error:" + result[0]);
        return;
    }
    Element response = XmlUtil.getRoot(result[1]);

    // the root id
    String entryId = response.getAttribute("id");
    ClientEntry rootEntry = repositoryClient.getEntry(entryId);

    dumpRecursive(rootEntry, TAB);
}

The recursive method and the toString method:

/**
 * Recursively traverses the entries and dumps their children and subchildren.
 *
 * @param entry the start entry.
 * @param tab the tabulator characters to use.
 * @throws Exception
 */
private void dumpRecursive( ClientEntry entry, String tab ) throws Exception {
    List<ClientEntry> childEntries = getChildEntries(entry.getId());
    for( int i = 0; i < childEntries.size(); i++ ) {
        ClientEntry childEntry = childEntries.get(i);
        outputStream.println(tab + entryToString(childEntry));
        dumpRecursive(childEntry, tab + tab);
    }
}

/**
 * Extracts some base information from the {@link ClientEntry entry}.
 *
 * @param entry the entry.
 * @return the string representation.
 */
public String entryToString( ClientEntry entry ) {
    String name = entry.getName();
    String id = entry.getId();
    Resource resource = entry.getResource();
    String type = resource.getType();
    return name + " ( id=" + id + ", type=" + type + ")";
}

And here is the most important method, the one that gets the children of an entry:

/**
 * Creates a {@link List} of {@link ClientEntry}s for a particular entry.
 *
 * @param parentId the id of the parent entry.
 * @return the list of child entries.
 * @throws Exception
 */
public List<ClientEntry> getChildEntries( String parentId ) throws Exception {
    String[] args = new String[]{ARG_ENTRYID, parentId, ARG_OUTPUT, "xml.xml", ARG_SESSIONID,
            repositoryClient.getSessionId()};
    String url = HtmlUtil.url(repositoryClient.URL_ENTRY_SHOW.getFullUrl(), args);
    String xml = IOUtil.readContents(url, getClass());
    Element root = XmlUtil.getRoot(xml);

    List<ClientEntry> childEntries = new ArrayList<ClientEntry>();
    List children = XmlUtil.getElements(root, "entry");
    for( int i = 0; i < children.size(); i++ ) {
        Object object = children.get(i);
        String id = XmlUtil.getAttribute((Node) object, "id");
        ClientEntry entry = repositoryClient.getEntry(id);
        childEntries.add(entry);
    }
    return childEntries;
}

The result of launching it on the above repository is:

morpheo ( id=02c47d07-6f7d-473b-b104-b922348f7d51, type=unknown)
documents ( id=660cb2ce-99f5-4d4b-9682-2f408c2b105e, type=unknown)
client code ( id=9977cbc1-2589-49c8-b7e0-9b944970d169, type=storedfile)
call_4_papers.odt ( id=33a026f4-87f3-4c78-8c1d-d14da597aa06, type=storedfile)
rasterlite-how-to ( id=0a138888-b5ee-42bf-be3e-f78c8c3a5232, type=storedfile)
hydrologis logo ( id=209c182d-e22c-48b2-933a-8d03ab4b72d9, type=storedfile)
netcdf ( id=91b9a338-d68e-4411-bdd3-b3e5a3f111c6, type=unknown)
 ( id=3a74f349-a4bf-4f9d-916d-83f4511e0877, type=localfile)
 ( id=400a17cc-f33c-40f4-a896-b2bffbed2f84, type=storedfile)
 ( id=6b69040b-7a96-46dc-95b2-0d6ae044f948, type=storedfile)
 ( id=eded7693-8b6e-4941-87f5-2fc9bb2a6bb7, type=storedfile)

Wednesday, September 23, 2009


You might ask yourself what ramadda is, and I can tell you that the Repository for Archiving, Managing and Accessing Diverse DAta is awesome!

For quite a while I have searched for a way to store diverse data, most of them grids, in a repository where the metadata would be preserved and possibly even editable. A place where one could extract subsets of data from the datasets. A place where one could access the data also directly from the GIS... yes, I know, everyone is thinking about OGC and some WCS and whatever else.
But the summer of code opened a Pandora's box full of presents for me, and by choosing the netcdf format many possibilities built on open source software become available.

Ramadda can be tested on Unidata's demo server, so I won't talk about the many features that one can try out there.

I want to talk about background jobs that can be done from the client code of Ramadda, which Mr. Jeff McWhirter was so kind to introduce me to and help me with.

First, to get started, you need the repository client library, which you can download here. Once the contained libraries are in the classpath, you can start.

Let us assume we start from an existing ramadda repository, that looks like the following:

Let's assume we need to upload a netcdf file that has been produced in JGrass by some model.
The following shows the code needed to do so.

First a client connection has to be instantiated:

RepositoryClient repositoryClient = new RepositoryClient(host, Integer.parseInt(port), base, user, pass);
String[] msg = {""};
if (!repositoryClient.isValidSession(true, msg)) {
    // throw some exception
}

where the isValidSession method in this case also performs a login in the ramadda environment.

Once the connection is done, the upload of any file can be done. The following method does that for you:

/**
 * Uploads a file to ramadda.
 *
 * @param entryName name of the entry that will appear in the ramadda server.
 * @param entryDescription a description of the entry that will appear in the ramadda server.
 * @param type the file type that is uploaded.
 * @param parent the parent path inside the repository, into which to upload the file,
 *          ex. morpheo/documents, where morpheo is the base group and documents
 *          the subgroup.
 * @param filePath the path to the file to upload.
 * @throws Exception
 */
public String uploadFile( String entryName, String entryDescription, String type,
        String parent, String filePath ) throws Exception {

    Document doc = XmlUtil.makeDocument();
    Element root = XmlUtil.create(doc, TAG_ENTRIES, null, new String[]{});
    Element entryNode = XmlUtil.create(doc, TAG_ENTRY, root, new String[]{});

    // name
    entryNode.setAttribute(ATTR_NAME, entryName);
    // description
    Element descNode = XmlUtil.create(doc, TAG_DESCRIPTION, entryNode);
    descNode.appendChild(XmlUtil.makeCDataNode(doc, entryDescription, false));
    // type
    if (type != null) {
        entryNode.setAttribute(ATTR_TYPE, type);
    }
    // parent
    entryNode.setAttribute(ATTR_PARENT, parent);
    // file
    File file = new File(filePath);
    entryNode.setAttribute(ATTR_FILE, IOUtil.getFileTail(filePath));
    // addmetadata
    entryNode.setAttribute(ATTR_ADDMETADATA, "true");

    ByteArrayOutputStream bos = null;
    ZipOutputStream zos = null;
    try {
        bos = new ByteArrayOutputStream();
        zos = new ZipOutputStream(bos);
        // write the xml definition into the zip file
        String xml = XmlUtil.toString(root);
        zos.putNextEntry(new ZipEntry("entries.xml"));
        byte[] bytes = xml.getBytes();
        zos.write(bytes, 0, bytes.length);
        // add also the file
        String file2string = file.toString();
        zos.putNextEntry(new ZipEntry(IOUtil.getFileTail(file2string)));
        bytes = IOUtil.readBytes(new FileInputStream(file));
        zos.write(bytes, 0, bytes.length);
    } finally {
        if (zos != null)
            zos.close();
    }

    List<HttpFormEntry> postEntries = new ArrayList<HttpFormEntry>();
    postEntries.add(HttpFormEntry.hidden(ARG_SESSIONID, repositoryClient.getSessionId()));
    postEntries.add(HttpFormEntry.hidden(ARG_RESPONSE, RESPONSE_XML));
    postEntries.add(new HttpFormEntry(ARG_FILE, "", bos.toByteArray()));

    RequestUrl URL_ENTRY_XMLCREATE = new RequestUrl(repositoryClient, "/entry/xmlcreate");
    String[] result = repositoryClient.doPost(URL_ENTRY_XMLCREATE, postEntries);
    if (result[0] != null) {
        outputStream.println("Error:" + result[0]);
        return null;
    }

    Element response = XmlUtil.getRoot(result[1]);
    String body = XmlUtil.getChildText(response).trim();
    if (repositoryClient.responseOk(response)) {
        outputStream.println("OK:" + body);
    } else {
        outputStream.println("Error:" + body);
        return null;
    }

    Element child = XmlUtil.findChild(response, "entry");
    String entryId = child.getAttribute("id");
    String urlString = "http://" + host + ":" + port + base + "/entry/get/" + entryName
            + "?entryid=" + entryId;
    return urlString;
}

This method returns a url string, that can be used in the browser to fetch the uploaded dataset.

So, if I had uploaded the netcdf file without supplying a particular path, it would have ended up in the base group, called morpheo in this case:

What I really love about ramadda is that if the dataset is, for example, in netcdf format, the metadata are accessible and editable also from the web interface:

There are several ways the data can be accessed; note in the picture below the opendap link, which is the one JGrass uses to visualize the dataset or use it in the models:

Ok, but I was writing about accessing the data programmatically. So how to fetch data from the server? Datasets can be downloaded from ramadda by means of their id. In fact, the last part of the url string returned above represents the unique id of the dataset (...?entryid=THEENTRYID).

Downloading a file from ramadda is incredibly easy, since the RepositoryClient class supplies a method called writeFile that takes as parameters the entry id and the output file to which to download:

repositoryClient.writeFile(entryId, outputFile);
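Putting the pieces together: the entry id can be clipped off the url returned by uploadFile and fed straight into writeFile. A small sketch (assuming the ?entryid= suffix described above):

```java
// Sketch: extract the entry id from the url returned by uploadFile, so the
// just-uploaded dataset can later be downloaded through writeFile.
public class EntryIds {
    public static String extractEntryId( String urlString ) {
        String marker = "?entryid=";
        int index = urlString.indexOf(marker);
        if (index == -1) {
            throw new IllegalArgumentException("No entryid found in: " + urlString);
        }
        return urlString.substring(index + marker.length());
    }
}
```

which would then be used as repositoryClient.writeFile(EntryIds.extractEntryId(urlString), outputFile);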

And that is all. Thanks to Jeff McWhirter for all the great help he gave me.

Soon I will be glad to describe some deeper integration between JGrass modeling environment and Ramadda.

Wednesday, August 12, 2009

The new navigation view...

Created for uDig, and for now used to browse netcdf files along a temporal axis... and since a screencast says more than many words... enjoy (as I did :) )!

Friday, August 7, 2009

Time and Z navigation internals are here

Another bit of the netcdf project is done. This time it was creating the internals of the navigation... more about it on the project page.

Just a snapshot to make this blog not look so annoying :)

Wednesday, July 29, 2009

Netcdf export engine and GUI finally done

I finally managed to finish the export wizard for netcdf files. For the first time, the problem for me was that I had to limit very carefully what could be done during export... if you are interested in a tour through the export process, I…

Thursday, July 23, 2009

Enhancements in the print editor of udig

During the uDig code sprint I worked a lot on the printing module, which needed some love. I made screenshots to give a description, but now I am so worn out from preparing tomorrow's release that I don't feel like writing much... I hope you will enjoy these quite self-explanatory screens:

And the resulting A1 landscape pdf...