High resolution screenshots (SOLVED!)

Ever want to take a screenshot at a higher resolution than your monitor supports? I do! I want laser print quality, e.g. 300-600 DPI (dots per inch), not a measly 70-90 DPI like my screen supports. So if I’m aiming for a 10″ wide printout, I’d need a 3000 pixel wide image. I don’t know about you, but my monitor hasn’t gotten up to 2000 yet 🙂

I found (at least) two approaches that I now use for this task…

I’m hoping you find this post if you are searching for this kind of solution, because I ran a hundred web searches that were *not* helpful or really even on topic. What do you search for, anyway? “screenshot virtual desktop” gives you a good start, but you’ll get a weird mix of unrelated results for sure. Heck, one proprietary vendor told me their product wouldn’t do it – they didn’t think it was even possible. Oh my. Hint: you want virtual desktop panning or scaling.

Here’s what I found works on my Linux box; both approaches will likely work on other platforms too, if you have suitable hardware (and if Microsoft and Apple give you the options)…

#1 – NVIDIA X Server Settings

My NVIDIA graphics card is up to the task; yours probably is too. Testing in my Red Hat Enterprise Linux (RHEL 6) desktop environment, I launched nvidia-settings (as root) and selected X Server Display Settings. There’s an Advanced button that lets you specify a resolution for panning. I set mine to 3000×1050 just for fun, logged out and back in to reset the X server settings, and was instantly in action.
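If you’d rather skip the GUI, the equivalent panning setup can go straight into xorg.conf. A minimal sketch only – the identifier is a placeholder from my own guesswork, so adapt rather than copy:

Section "Screen"
    Identifier "Screen0"
    SubSection "Display"
        # desktop larger than any physical mode; the visible screen pans over it
        Virtual 3000 1050
    EndSubSection
EndSection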

The result is a huge desktop, and to move around in it you slide your pointer off the edge of the screen. When you want a screenshot, just start up your app, maximize it (you’ll see a portion of it!) and launch your screenshot app – in my case KSnapshot is my friend. I tell it to capture the whole window and… it just works. I did find a faint ghosting effect where the KSnapshot dialog was still showing a bit on top of the window I captured – I suspect redrawing the expanded graphics takes a bit more time and energy, so give yourself a few extra seconds of capture delay.

TODO: learn enough X server configuration to also enable scaling, so I’d have a high resolution desktop that still fits on my screen – displayed at an ordinary 72 DPI, but capturable at full resolution when needed. I looked at the Xephyr project too; it seems similar, but it’s probably overkill for me at this stage unless I’m doing screenshots remotely.
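For what it’s worth, on drivers with RandR transform support that scaling trick may be a one-liner (a hedged sketch – I haven’t confirmed the proprietary NVIDIA driver of this vintage supports it, and the output name LVDS1 is a placeholder; run xrandr alone to see yours):

xrandr --output LVDS1 --scale 2x2   # desktop at double resolution, squeezed onto the panel
xrandr --output LVDS1 --scale 1x1   # back to normal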

#2 – VNC / Remote Desktop

Panning around the screen in solution #1 might not be an option for those still using mommy’s netbook for their work… so here’s your alternative. You may even find this preferable to the above, because scaling works well – at least in my VNC client app. (Hint: VNC is worth learning about if you don’t already know it; it’s basically what Windows, Mac and even RHEL call Remote Desktop.) In Linux it’s simple: start a VNC server session and a copy of your desktop starts up virtually. You don’t see anything until you connect to it using a VNC (or Remote Desktop) client program – you can connect locally or remotely through your network.

In my case I installed TightVNC (I think), but it likely doesn’t matter – just find a VNC server and client package to try out. I ran the command vncserver -geometry 3000x1050 – and I was off to the races. Then, even using the same computer and desktop, I ran the Remote Desktop Viewer (filed under Internet apps in my menu) and entered the address and display “port” number that the vncserver command mentioned after starting up – e.g. localhost:1. I also have an option in my connection dialog to turn on scaling – yes, try it, you’ll like it!
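For reference, the whole round trip looks something like this (TightVNC-style commands; your package’s wrapper may differ slightly):

vncserver -geometry 3000x1050   # prints a display number, e.g. "desktop is myhost:1"
vncviewer localhost:1           # or use your desktop's Remote Desktop Viewer
vncserver -kill :1              # shut the virtual desktop down when you're finished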

The result is really cool – nothing stupendous, but it will solve all your woes with high-res screenshots. Need to see what a website looks like at a higher resolution than your papa’s Dell laptop supports? Give this a try. No promises on Windows and Mac, as they often restrict what you can do with virtual desktops, but if a VNC server can be run, then it just might work there too.

I’m sure some off-the-shelf solution exists – but I haven’t found it. What I would ideally have is a tool on my desktop that I can click – switching to ultra high-res mode, saving a screenshot, and then dropping me back to normal resolution. Should be doable – holler when you need a beta tester!
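Until then, here’s a rough sketch of how I imagine it working, built on a throwaway VNC display and ImageMagick’s import command (an untested idea – the display number, geometry, app name and delay are all assumptions):

#!/bin/sh
# hypothetical one-click high-res screenshot - a sketch, not a finished tool
vncserver :9 -geometry 3000x1050                # spin up a virtual 3000x1050 desktop
DISPLAY=:9 yourapp &                            # launch the app to capture (placeholder name)
sleep 10                                        # give it time to start up and draw
DISPLAY=:9 import -window root hires-shot.png   # grab the whole virtual screen
vncserver -kill :9                              # tear the virtual desktop back down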

One app I will be taking screenshots of is Quantum GIS – it will certainly be interesting to see how quickly my WMS layers fail to load at higher-than-normal screen resolution… so I’m sure I’ll be blogging about tile caching sometime in the future too 😉

Tyler Mitchell
12 Nov 2011

p.s. I wrote this agggges ago – if it works for you, please comment so I know it’s not something I should address/update.

State of the Bounty?

I’m regularly asked how users or companies can request improvements to OSGeo software, or make a specific bug go away. Short of pointing people to the ~200 OSGeo Service Providers (http://osgeo.org/search_profile) and helping them get onto the right mailing lists to ask for help… what else can I do? Naturally some companies are more active than others in providing code-level support like this, yet I know there are even more out there.

Trying to bring the two groups together through a bounty program (a fee offered for a specific feature built or bug fixed) is always controversial. Yes, I have heard all the for/against arguments already 🙂 but I’d still like to see who out there is already doing it in this space.

So my question tonight is twofold:

1) What GFOSS or OSGeo projects have looked at a bounty system and are either implementing one or hoping to? If you are offering up “pay for this great new feature” suggestions, that might apply as well. I’d love to know how you are implementing it.

2) If you are a service provider or developer on GFOSS/OSGeo platforms, would you be interested in being connected with potentially paying customers (hmm, I hope you’ve answered yes so far) who have particular requests for the code base?

I’m more curious than anything and wonder what we’ll find after scratching the surface! I know some local chapters have even discussed ideas for supporting their favourite packages. Please let me know if you have anything to share – I’ll summarise and re-share the results in a few days.

An OSGeo Marketplace?

I’ve mentioned the OSGeo Service Providers Directory before. It’s nothing fancy but quite useful, especially if you’re trying to get a feel for the ecosystem of providers out there. Currently there are 196 providers registered (are you?) across 25 software projects. I haven’t done the country or language calculations, but I did look at the cumulative number of service providers saying they support a given GFOSS project. I’ll tidy it up eventually, but here’s a rough graph for your viewing pleasure below (average 43, max 150, min 2).

That’s pretty astounding for me to see. Consider what it would have been five years ago, before anyone had even started collecting this data.

Now what I’d like to know is how many of those providers are VARs vs. developers vs. committers. Ohloh.net may help identify, for example, core committers. But it’s more of a challenge to make the connection between developers who can provide code-level support (i.e. bug fixes, new features, as hinted at in my previous post) and the users/customers who could use the help.

I wish you could group Ohloh users into companies, or that we had a flag in the directory for the various levels of service a provider offers. A good project to start digging into on the next rainy day.

Update: the graph now includes a fix for a JUMP/OpenJUMP stats mistake.

Installing GeoServer on OSX (again, after 4 years)

A few years back I wrote up my experience installing GeoServer on my MacBook. I thought I’d take another stab at it and see how it goes on a newer Mac, this time a MacBook Pro running Snow Leopard (10.6.8).

I’m sure some groups are packaging GeoServer into simple-to-install bundles that I should know about, but not being daunted by the “hard way” I’m following my new-user approach to getting through this: pretend I know nothing. First stop – http://geoserver.org. Then I clicked my way through Download, then Stable, and selected the Mac OSX Installer – imagine that! My confidence is buoyed; I feel I’m on the right track.

This is brilliant, guys – you couldn’t have made it easier. I only wondered if I should have grabbed the OS-independent download instead, but I won’t tempt fate by deviating from the obvious at this late hour.

Now that the geoserver-2.1.1.dmg file (56 MB) has downloaded from SourceForge, let’s see what we’ve got. Can it get easier than this?

GeoServer OSX Installer Package
I dragged the GeoServer icon onto the Applications icon but got an error about the Applications alias. Rather than fiddle, I opened a new Finder window and dragged it directly into the Applications folder – no problem.

Now, what do we get when we launch GeoServer from the Applications icon?

After selecting the Server -> Start menu, it throws a few log lines up in the console window and announces that GeoServer is started. Right away my web browser pops up with the GeoServer Welcome Page:

A successful, and very easy, install. You should now have a look at the GeoServer docs to get started with the real utility of the application. Good work, GeoServer dev team – I was thrilled not to touch a WAR file or go hunting for a Tomcat install this time!
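If you like double-checking from a terminal, a quick probe works too – assuming GeoServer’s usual default port of 8080 (I haven’t verified what the OSX package uses):

curl -I http://localhost:8080/geoserver/web/
# an HTTP 200 (or a redirect to the welcome page) means it's serving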

Multilines to Polygons with GRASS and Quantum GIS

Tonight I had a set of GPS forestry block line features to convert into polygons for a friend. The file had been created using a merge tool in QGIS, which incidentally produced multiline features. That might have been fine, until we wanted to convert them to polygons: the fTools vector conversion in QGIS broke the polygons at the start of each new line feature. Hmm…

Time for topology tools. I created a simple mapset using the GRASS tools in QGIS and imported line_in.shp into a new GRASS layer. I then used the snapping generalisation tool with a 5 m tolerance to make sure my line features “closed” properly for polygon building, then used the line-to-boundary conversion tool to finish the job. After loading the layer into my view, I was able to save it as a new shapefile to send on to the client.
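For reference, the command-line equivalent of those clicks looks roughly like this (GRASS 6-era option names and made-up map names – presumably the QGIS toolbox runs the same modules underneath):

v.in.ogr dsn=line_in.shp output=blocks_lines                       # import the merged multilines
v.clean input=blocks_lines output=blocks_snap tool=snap thresh=5   # snap endpoints within 5 m
v.type input=blocks_snap output=blocks_bnd type=line,boundary      # lines become boundaries
v.centroids input=blocks_bnd output=blocks_areas                   # add centroids so areas form
v.out.ogr input=blocks_areas type=area dsn=blocks_poly.shp         # export the polygons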

Bang, done – 20 minutes, including trial and error. That was quicker than it took me to try to write up the problem for the QGIS bug tracker 😉

Here’s a quick (and rough) video showing what I did (even a couple mistakes). Go full screen to see the detail at full res.

Open Data in BC? TRIM Data – Open It Up!

The British Columbia government’s Terrain Resource Information Management (TRIM) mapping program of the late 1990s/early 2000s was one of the first GIS projects I worked on (from a distance). With a comprehensive aerial photography collection programme in place (and awesome guys in Victoria managing it all), we held our breath for several years waiting for good enough weather to finish our Cariboo Region photo coverage. (I’ve stopped cursing the mountains now.)

Watching my colleagues digitising from photos and seeing the huge contracts being let to photogrammetry outfits was mind-numbing at best. The sheer volume of data was astounding, and the detail (at 1:20,000 scale) was superior to anything I had ever seen in a GIS. Of course the orthophotos and elevation models were the best eye candy, but the detailed attribution of vectors with innumerable feature codes was really cool. Sure there were limitations, but if I recall correctly, they were limitations of the GIS software at the time, not the data. We spent a lot of time working with Safe Software’s FME to arm-wrestle this info into submission – but it was actually kind of fun 🙂

Private users had to pony up some good cash to get access to the data – and at the time it seemed well justified, as copying DVDs of data around was still new. FTP transfers were still the norm, but you needed a Data Sharing Agreement with the Province for that level of access. In essence you had to be big business (or working for one), doing work related to government resources (forest management, utilities, etc.), to get it.

All that to say: not too long ago a Few Good Men stood up a WMS to share the TRIM resources (using MapServer, I do believe). This revolutionised my little universe. As a GIS consultant in forestry it meant we could do our job a lot better – for example, no more waiting for DVDs or FTP transfers. But most of all it meant quick access to data we needed on a daily basis – no more deciding whether it was worth the bother to ask for the data; we just used it! Once ArcGIS finally supported WMS, we were able to pull up background photos and just get the job done reviewing/editing our clients’ data.

How much did we help reduce the backlog of data access requests by using this service instead? With this more efficient distribution, I thought the data would eventually be blown wide open and we could start getting easier access to the lower-level vector data we really needed (rural roads, elevation models/contours, etc.).

So when I saw the announcement this week about the release of a new Open Data policy and catalog at the BC Government, you can guess what I went searching for first.

In short, my searches for elevation, terrain, and contour all returned unrelated results. I did find one for “TRIM” and it sounded very promising: “Topography, Planimetry, Elevations, and Toponymy (place names) for all of British Columbia” – doesn’t that sound like pretty fundamental geodata? But it was precisely the same data source I’ve been using for years, available in KML and WMS only.

Reviewing the metadata shows business as usual (aka “cost recovery”):

I’m not going to judge this “open data” move as pure hype, because I don’t think it is. I assume many, many other great datasets were released this week – I will enjoy going through more of them eventually. But so far I’m disappointed that this foundational set of base mapping data is still truly locked up. Please open it before it’s so out of date that it’s not usable!

Thanks again to those who got us WMS access – my recommendation: put them in charge of getting us vectors, or some sort of raw data access, and they’ll find a good solution.

Next up, I’ll check on the state of VRI & Forest Cover – the inventories of one of our great Crown timber assets. Are they open now? Or not?

(Hit me on twitter: @spatialguru with your comments)

3D Rendering in the Cloud – Revit Architecture 2012

I’ve been learning Autodesk’s Revit Architecture 2012 along with my kids and we are really enjoying it. We’ve used the open source Blender project and SketchUp a lot in the past, but wanted to stretch our wings a bit more seriously.

With architectural smarts built in, Revit’s really fun to use. You don’t just draw masses and meshes (though you can!) and extrude standard shapes. Instead you deal with floors, walls and ceilings; you place components (desks, people, cars) and define materials, costs and more.

The built-in Seek family search tool gives broad access to detailed content, but I also use revitcity.com, and you can import SketchUp models too. The sky is the limit, really. By the way, it’s not just about 3D visuals, but about the infamous BIM: planning, scheduling, parts lists and more.

Enough intro… (it’s worth a try if you like this kind of stuff).

A labs.autodesk.com project called Neon recently came out that allows you to easily add a toolbar item for sending your projects to a cloud service for rendering. Neat idea, so I gave it a try tonight.

My little study project is a marina design. To render it locally in “best” quality mode took me just over 16 minutes.

Having the web service do it took 1 minute to upload and only 45 seconds to render (with similar settings). But it did sit in the queue for quite a while – I’m guessing I didn’t get all 64 of their cores working on my project, but it was interesting. I ran several tests earlier that got much better times; I’ll have to dig into how to optimise for performance at some point. It would be nice to see whether you’re stuck in a queue or not, though – that’s my only complaint.

You also get a nice online catalogue of sorts, so you can review past renders and easily access them from elsewhere. I can go out of the house and see what the kids might have been working on!

Good work on the Neon labs project. Two thumbs up!

A New Venture: Book Publishing

Behind the scenes, in my spare time, I’ve been picking away at putting the structure in place to publish new books, with a particular focus on open source geospatial projects.

We are getting close to rolling out our first two books – you can read more about our motivation, and our invitation to join us, in a recent blog post on the Locate Press site:

http://bit.ly/hMVxo3

Hope to hear from you about the books you – or your clients, users and students – need!

p.s. This is really a side-job experiment done in my spare time – rest assured, I have no desire to quit my day job! 🙂

Ready, Set, FOSS4G!

Close to 300 presentation abstracts submitted! Stay tuned.

Inkscape Batch Mode – Convert SVG to PNG tip

I’m so used to using ImageMagick’s convert command that I felt somewhat cornered when I couldn’t get a decent SVG-to-raster conversion out of it. I do all my SVG work in Inkscape, which can output raster files directly, but I was looking for a batch solution.

Thanks to this Ubuntu forum post I realised Inkscape has a batch mode – imagine that foresight! Bravo. This snippet did the trick, including resizing:

for i in *.svg; do inkscape -f "$i" -e "${i%.svg}.png" -w 640; done
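If you’re after print-ready density rather than a fixed pixel width, there’s also an export-dpi flag (at least in the 0.48-era CLI I’m using):

for i in *.svg; do inkscape -f "$i" -e "${i%.svg}.png" -d 300; done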

I imagine there are a bunch of other command-line options, but these were the first I saw – or thought of. That’s what happens when we use GUIs too much. I’ll now switch back to doing my graphics with Turtle from the command line!
