My Works, Technology, WWW

Draggable directions with Google Maps APIs

I love calculating driving directions on Google Maps and then dragging the blue line that marks my route to change it. Everything on the page updates automatically and the directions are recalculated to go through the new point I defined.

I’ve been playing around with the Google Maps APIs recently and imagine my disappointment when I found out that Google does not allow directions calculated through the APIs to be dragged around.

Not put off by this, I decided to try to replicate the directions-dragging myself. How hard can it be?
As it turns out: very, at least if you want it to look as smooth as Google’s own solution.

When generating directions, Google Maps adds a GPolyline overlay to your map. You can set the returned line to be editable, but this makes an awful lot of vertices appear on it, which makes reading your directions quite hard. Even then, once you drag one of these vertices around you are not editing the route but just changing the shape of the line.

Mine is not a complete solution and I’m interested in feedback and ideas on how to improve it.

First off, calculate your directions:

var map = new google.maps.Map2(document.getElementById('map_canvas'));
// Start and end points; any geocodable address strings will do here
var wayPoints = ['London, UK', 'Manchester, UK'];

var myDir = new google.maps.Directions(map);
myDir.loadFromWaypoints(wayPoints, { travelMode: G_TRAVEL_MODE_DRIVING });

This will calculate your driving directions and plot a GPolyline on your map. The line is easily accessible on the GDirections object once the directions are calculated. To intercept this I decided to use the addoverlay event on the GDirections object, which is triggered every time the directions overlays are plotted on the map (it does what it says on the tin).

google.maps.Event.addListener(myDir, "addoverlay", function() {
  var dirLine = myDir.getPolyline(); // Get the polyline from the directions object
  dirLine.enableEditing();           // Show the vertices and make them draggable

At this point we can ask the APIs to make the GPolyline editable (the enableEditing() call above). This makes the vertices appear on the line and makes them draggable. As I said before, this only changes the shape of the line and doesn’t actually affect your directions object. Luckily the GPolyline comes with a nifty event called lineupdated, which is triggered once the user has finished dragging a vertex. By intercepting this we can look through the vertices and work out what has changed on the line and where the vertex has been moved to.
In order to do this we must also save the previous positions of the vertices (latitude and longitude), so we can compare the old line with the new “edited” one.
Another challenge is the fact that the GDirections object can accept only so many waypoints (25, if I’m not wrong), which means we’ll have to add only the vertices that have changed to the directions, not all of them.

// In the addoverlay event also save the original vertices of the line
var origLine = [];
for (var i = 0; i < dirLine.getVertexCount(); i++) {
  origLine.push(dirLine.getVertex(i));
}
// DONE saving vertices

// Now intercept the lineupdated event and collect the changed waypoints
google.maps.Event.addListener(dirLine, "lineupdated", function() {
  var routePoints = [];
  for (var i = 0; i < dirLine.getVertexCount(); i++) {
    var savedPoint = origLine[i];
    // Keep a vertex only if it is new or has moved since we saved the line
    if (!savedPoint ||
        savedPoint.lat() != dirLine.getVertex(i).lat() ||
        savedPoint.lng() != dirLine.getVertex(i).lng()) {
      routePoints.push(dirLine.getVertex(i));
    }
  }

// Now we remove the previous directions and recalculate the route
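A minimal sketch of that last step, assuming the original start and end addresses are still in the wayPoints array (clear() and loadFromWaypoints() are standard GDirections calls; toUrlValue() turns a GLatLng into a “lat,lng” string the loader will geocode):

  // Rebuild the waypoint list: original start, moved vertices, original end
  var newWayPoints = [wayPoints[0]];
  for (var j = 0; j < routePoints.length; j++) {
    newWayPoints.push(routePoints[j].toUrlValue()); // "lat,lng" string
  }
  newWayPoints.push(wayPoints[wayPoints.length - 1]);

  myDir.clear(); // Remove the old polyline and markers from the map
  // Note: the reload fires addoverlay again, re-wiring everything for the new line
  myDir.loadFromWaypoints(newWayPoints, { travelMode: G_TRAVEL_MODE_DRIVING });
  }); // end of the lineupdated handler
}); // end of the addoverlay handler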

This works quite well but does not look as smooth as Google’s solution.
The problem is that while you are dragging a vertex only that bit of the GPolyline moves and the rest stays in its original position, which makes the shape of your directions quite awkward while you are dragging. Unfortunately the GPolyline does not come with a “startdragging” event, otherwise we could just recalculate the route every few seconds while the vertex is being dragged.

This is not the most elegant of solutions but it does the job.

My Works, Technology, WWW

SVG graphics with JavaScript

When I started developing TweetSentiment I decided that the interface should have as little text as possible. Most of the information I was interested in could be displayed graphically, with a chart.

So I looked at all the options available for chart generation.

  1. Backend code to generate a static image (JFreeChart or PHP)
  2. Flash object to draw a chart retrieving the data from a URL
  3. Draw charts in JavaScript directly on the client’s browser

I’m not a huge fan of option one, primarily because TweetSentiment is hosted on a tiny Linux box which would not be able to handle the load for the traffic the site gets, but also because the result is a static image – it’s just not very funky, and no interaction is possible.

Option two would certainly create spectacular-looking charts, but I have almost no experience with Flash and I wasn’t about to start learning a new language/technology. Plus, I’m not into browser plugins if I can avoid them.

JavaScript is a language I’m familiar with, and I remember seeing some cool-looking charts generated with Dojo. Unfortunately for TweetSentiment I had used jQuery, since the most important thing for me there was DOM manipulation (and jQuery is just better for that).
I then started shopping around for jQuery plugins to generate charts. There are a few around, but none of them impressed me: they just weren’t as good-looking as I’d hoped, nor were they interactive.

By coincidence I stumbled on RaphaelJS, a standalone JavaScript library for drawing Scalable Vector Graphics directly from JavaScript. I tested the samples on the website with a few browsers and was happy to discover it worked just fine with all of them.

Scalable Vector Graphics (SVG) is a family of specifications of an XML-based file format for describing two-dimensional vector graphics, both static and dynamic (i.e. interactive or animated).

The SVG specification is an open standard that has been under development by the World Wide Web Consortium (W3C) since 1999.
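As a quick taste of the library, this is all it takes to put a shape on the page (assuming a div with id “canvas” in your HTML; the id and sizes are just for illustration):

// Create a 320x200 drawing surface inside <div id="canvas">
var paper = Raphael("canvas", 320, 200);
// Draw a circle at (160, 100) with radius 50 and style it
paper.circle(160, 100, 50).attr({fill: "#f90", stroke: "#333"});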

I also discovered that there is a charting library built on top of RaphaelJS, which is exactly what I was looking for. However, being a geek, I decided to go ahead and try to develop something on my own. You know, just for kicks.

As I delved deeper into RaphaelJS I found the library to be incredibly powerful. It’s a shame that the documentation provided on the website lets it down a bit.
The most powerful bit is the ability to extend objects and attach new functions to them, something scarcely mentioned in the available documentation.

For example, if you need to use curved lines (paths, as SVG calls them) you can just define a function once and then call it from your code, simply by adding it to the el “object” in RaphaelJS:

Raphael.el.curveTo = function () {
  // Turn the arguments object into a real array
  var args = Array.prototype.splice.call(arguments, 0, arguments.length),
      // 4 arguments -> "s" (smooth curveto), 6 -> "c" (full cubic curveto)
      d = [0, 0, 0, 0, "s", 0, "c"][args.length] || "";
  this.isAbsolute && (d = d.toUpperCase()); // upper-case commands are absolute
  this._last = {x: args[args.length - 2], y: args[args.length - 1]};
  return this.attr({path: this.attrs.path + d + args});
};

Another very useful function I found in one of their samples is andClose(). This is used to close a polygon you have started drawing with paths: no matter where you have got to, it will reconnect the path to its initial point.

Raphael.el.andClose = function () {
  return this.attr({path: this.attrs.path + "z"}); // "z" closes the SVG path
};

These helpers can then be chained together:

RaphaelJSElement.lineTo(x, opts.height - bottomgutter).andClose();
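For completeness, the lineTo() used here is another helper from the Raphael samples rather than a core API call. A sketch of how it can be defined, following the same pattern as curveTo above (my reconstruction, so treat it as approximate):

// Sketch: append an SVG "lineto" command to the element's current path
Raphael.el.lineTo = function (x, y) {
  this._last = {x: x, y: y};
  return this.attr({path: this.attrs.path + "L" + x + " " + y});
};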

I’m still developing the chart library I used in TweetSentiment and I’m planning to publish it here with some documentation under the MIT licence.

Technology, Thoughts, WWW

JavaScript games – more thoughts on O3D

This week I wrote a post comparing O3D and WebGL.

Today I have finally spent some time playing with O3D and managed to implement some very simple applications.

Now that I have a clearer understanding of what O3D can and can’t do I have given some thought to the possibility of writing videogames in JavaScript. As I mentioned in my previous post I can’t see myself playing something like Fallout in a browser window. Nonetheless I can imagine simple multiplayer games, something like Monopoly or Risk, working this way.

I have developed quite a few JS applications that allowed users connected at the same time to interact with each other. It’s very simple: constant AJAX posts and gets, with a server keeping the state of the interaction. Imagine something like GTalk integrated inside GMail.
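A minimal sketch of that pattern with jQuery (the /state endpoint and the payload shape are made up for illustration):

// Every second, post our state to the server and pull back everyone else's.
// The "/state" endpoint and payload shape are hypothetical.
function syncState(myState) {
  jQuery.post("/state", myState, function (sharedState) {
    // ...update the page from sharedState here...
    setTimeout(function () { syncState(myState); }, 1000);
  }, "json");
}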

This is all well and good when the interaction is limited to a few chat messages or the coordinates of the mouse pointer on the screen, but multiplayer videogames have to shift a massive amount of data every second. When you play Gran Turismo online, the position, speed and state of each player’s car must be synced across all the participants as often as possible. Add chat/voice data to that and you’ll soon realise that 30 players in one game calling your server at the same time to get and post data is just not manageable. Furthermore, to ensure the timely delivery of the data to each client you are much better off pushing the data to the clients rather than relying on them to call your server.

What O3D should add to its APIs is a DirectPlay alternative: multiplayer support built straight into O3D. This way your JavaScript game would be able to establish peer-to-peer communication between all the clients without having to stress your servers; simple socket communication giving developers the ability to push data between all the connected peers.
Network support, by being built inside the O3D plugin, could also deal with all the annoying connectivity issues such as “punching” through NATs.
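Purely as a thought experiment, the kind of API I have in mind might look like this (entirely imaginary; nothing like it exists in O3D today):

// Imaginary multiplayer API sketch; no such thing exists in O3D today.
var car = { x: 0, z: 0, speed: 0 }; // our player's state
var session = o3d.network.createSession("my-game", { maxPeers: 30 });
session.onPeerData = function (peerId, data) {
  // Apply another player's position/speed/state to the local scene
};
// Push our own state straight to every connected peer, no server round-trip
session.broadcast({ x: car.x, z: car.z, speed: car.speed });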

Without properly implemented network play I don’t think we’ll ever see 3D games flourish in your browser window.

Technology, Thoughts, WWW

WebGL and O3D

You may have read recently that Khronos is implementing something called WebGL. The objective of the project is to expose all of the OpenGL ES calls to JavaScript, thus allowing hardware-accelerated 3D graphics within a browser.
Google has also been working on an alternative, called O3D.

Let’s first talk about the technical differences between the two projects.

O3D and WebGL, while both trying to bring accelerated 3D graphics to the web, have taken two fairly different courses. As I mentioned in the introduction to this post, WebGL’s plan is simply to expose the OpenGL ES 2.0 APIs to JavaScript, whereas Google’s solution is based on a browser plugin.

If we think about this we’ll soon realise that WebGL depends entirely on JavaScript, and JavaScript, as of today, is a fairly slow language. This point was made in a discussion thread on the O3D project website:

WebGL, being 100% dependent on JavaScript to do an application’s scene graph, is going to have serious problems drawing more than a few pieces of geometry at 60hz except in very special cases or on very fast machines. This means WebGL requires JavaScript to:

*) do all parent-child matrix calculations for a transform graph.

*) all culling calculations (bounding box to frustum or other)

*) all sorting calculations for dealing with transparent objects.

*) all animation calculations.

As an example the kitty demo in O3D is doing linear interpolations on 2710 floats to animate 170 transforms. The point is not that the artist that created the kitty should probably not have used 170 bones. ;-) Rather the point is it seems unlikely that JavaScript will be able to do that anytime soon and if it can then just add more than one kitty to pass its limits.
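To make the quoted point concrete, this is the kind of per-frame work it describes, written out in plain JavaScript (my illustration, not code from the demo):

// Linearly interpolate an array of animation channels; the kitty demo
// does this for 2710 floats on every frame, in pure JavaScript
function lerpChannels(from, to, t, out) {
  for (var i = 0; i < from.length; i++) {
    out[i] = from[i] + (to[i] - from[i]) * t;
  }
}
// At 60 frames per second this runs 60 times a second, before any rendering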

Also we have to keep in mind that not all hardware supports OpenGL ES.

O3D, by virtue of being a browser plugin written in C++ (so an additional, hopefully fast, abstraction layer on top of the GPU), allowed Google to define a new set of APIs to expose to JavaScript and to keep us (the JavaScript developers) away from the hardware. O3D takes care of the interaction with either DirectX or OpenGL.

Furthermore, Google has open-sourced O3D through its Google Code website, which means we can all have a look at the code and participate in the project. This has also resulted in a lot of documentation being available; for a full overview of how O3D works, check out the technical overview on the O3D project page.

Do you think that this is the making of a new “standards war”? Both Google and Khronos are adamant that they are not competing. However, I believe that ultimately only one project will come out as a standard: as the complexity of 3D web applications increases, it will not be feasible to write code for both “APIs”. The only question for me at this point is which will come out on top.

To answer this I would look at the audiences of the two projects. OpenGL has been out in the wild for a long while, and many developers of videogames, or of graphics applications in general, are already familiar with its APIs and the way it works, so it would probably make sense for them to embrace WebGL.
Nonetheless O3D still stands a chance, for a very simple reason: it’s the web we’re talking about.

Frankly, I can’t see myself playing a big videogame like Fallout in a browser window anytime soon. These APIs will be used to enrich web applications, and some examples are already coming out using O3D. Have a look at this Home Designer – can’t you already see IKEA using it?
My point here is that we’re not likely to see game developers switch to the web. We’re much more likely to see web developers start working on games or applications involving 3D graphics, and this is where Google wins.

O3D extends application JavaScript code with an API for 3D graphics. It uses standard JavaScript event processing and callback methods.

As a web developer I can keep writing JavaScript code the way I’m used to, without having to change the way I think to match how a game developer does.
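To give a feel for that, here is roughly what the start of an O3D application looks like in Google’s samples (reproduced from memory, so treat the details as approximate):

// Approximate O3D bootstrap, following the pattern in Google's samples
o3djs.require('o3djs.util');

window.onload = function () {
  // makeClients sets up the plugin <object> elements and calls us back
  o3djs.util.makeClients(function (clientElements) {
    var client = clientElements[0].client;
    var pack = client.createPack(); // container for our scene objects
    client.setRenderCallback(function () {
      // Per-frame logic: standard JavaScript callbacks, as the quote says
    });
  });
};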

What do you think?

General Babbling, News, Technology, Thoughts, WWW

Google tackling online storage

After much anticipation and hype the Gdrive seems to be on its way, or so the WSJ reports.

A Google spokeswoman declined to comment on any specific online storage plans beyond what it already offers as part of its email and other services. But she said in a statement that “storage is an important component of making Web [applications] fit easily into consumers’ and business users’ lives.”

Most companies, from small businesses to big giants, are moving their environments online to make documentation, presentations or whatever else may be needed available to their employees, wherever they may be, whenever they want.

As I said in a previous post, Google is pushing its online productivity suite, and shared online storage could definitely give an additional boost to the entire system.
Online storage is one of the few reasons why I use .Mac; the second rationale behind the choice is that the interface is just brilliant: the iDisk is mounted as a file system and directly accessible from my Finder.

In my opinion, if Google really wants to make the Gdisk a must-have for small and big businesses, client software to access the data is vital – not because it works better, but because it is a step final users have to go through to get used to online storage solutions. Most people don’t, and won’t for a while, use Writely or Google’s new PowerPoint-ish software; they’ll keep creating documents in their local environment, and the sensation of accessing a local drive to save their work will make them feel somewhat more secure.

For its office components to attract big businesses Google still has to do a great deal of work on the corporate account handling side; being able to organise accounts in groups and set different access permissions on a Gdisk’s folders would be a great start.
Another useful additional feature, which as I understand it is due sometime soon, is offline availability of the applications. An internet connection is not always available, and an entire company can’t just stop working because the IT people in the basement are messing around with routers.

Having said that, it’s not only functionality-related issues Google has to address, but also privacy and security questions. If they want more of our data to be stored on their servers – and with the Gdisk it wouldn’t only be images and documents, but all sorts of data we may not want other people to see – we expect Google to have some pretty satisfactory answers ready, especially when we’re talking about the confidential and potentially vital information its business customers would save in the cloud.

Technology, Thoughts, Video, WWW

BBC streams video the Microsoft way

I was invited a few months ago to test the BBC iPlayer. I quite liked the idea and I’m very interested in everything streaming. I also tested Joost and Zattoo.

Unfortunately, when the BBC decided to send me the invitation I had a Mac laptop and my computer at home was running Vista, !@#$. Obviously the player doesn’t work with Mac OS. I wasn’t, however, expecting the application not to work with Vista: when I tried to activate it I was greeted by an error popup telling me that the system works only with Internet Explorer on Windows XP. Fair enough.

I have now switched back to good old XP and am ready to test this player. Still no Firefox: I opened IE and logged in to the BBC site with my beta account. Beautiful web interface, which works fine with Firefox too. After installing the small application, an icon appears in the systray called “Your Library”. Say what? I thought all BBC content was part of my library and I just had to click on any video to have it streamed to me instantly.

Apparently not. Opening the “Library” just says: you have nothing. OK, so back to the BBC website with my newly installed IE plugin. I clicked on the new episode of Little Britain and right away the site told me that the video was being downloaded. Again, my only reaction was “Say what?”. So back to the library window, which is not even an application but a small popup running the IE engine with the BBC plugin to access your system information. This bright pink window told me that a 300 MB video was being downloaded and that I had 500 MB of space on my hard drive allocated for the library.

My video is now here. Excellent, click on play. Another small IE-powered popup opens and, disguised under a customised interface, Windows Media Player starts playing the video – a bit slowly, but you can’t expect much from WMP. Oh yeah, I almost forgot: my licence for the video expires in 3 days, after which it’s a useless 300 MB file.

Now I don’t pretend to know any better than the BBC people working on this. They must have thought it through quite thoroughly. I do have a few questions though.

  1. Only Windows XP and IE? Ever heard of Flash? YouTube? Anybody? How about Mac users? Linux? Vista?
  2. Download a 300 MB file that I have to trash after 3 days? Planning to save on your bandwidth anytime soon?
  3. Ever heard of streaming over P2P to save the aforementioned bandwidth and offer more content? Seriously, you should check out Joost; they started developing their application before you did.

Please, please have a look at Joost. Multi-platform, more content and less bandwidth usage, because we stream to each other over P2P. Maybe Janus Friis and Niklas Zennström will take pity on you and give you a small channel on their platform to share your content, if you ask nicely.

Thoughts, WWW

US goes on holiday, the internet goes bye bye

Does the blogging world revolve around the US? This blog’s statistics seem to confirm it.

I had a look at the statistics for both my blog and other websites I work on (charts below) and it would seem that most internet traffic for blogs is generated by the US: everything stopped for Thanksgiving. There’s no denying that even in Europe we’ve seen the internet boom and the ensuing whirlwind of activity and startups, but the traffic, the user base, seems to still be mostly US-based.

Are we Europeans still a bit behind in terms of internet mentality? With China still pretty much behind a great wall internet-wise, and broadband connections not as readily available in most areas of Asia as they are in the West, is the US the only big “market”?

[Chart: The Big Deal – statistics]