Category Archives: WWW

Draggable directions with Google Maps APIs

I love calculating driving directions on Google Maps and then dragging the blue line marking my directions to change the route. Everything is updated automatically on the page and the directions are re-calculated to go through the new point I defined.

I’ve been playing around with the Google Maps APIs recently and imagine my disappointment when I found out that Google does not allow directions calculated through the APIs to be dragged around.

Not put off by this, I decided to try and replicate the directions-dragging myself. How hard can it be?
As it turns out: very, at least if you want to make it look as smooth as Google's own solution.

When generating directions Google Maps adds a GPolyline overlay to your map. You can set the returned line to be editable, but this makes an awful lot of vertices appear on it, which makes reading your directions quite hard. Worse, once you drag one of these vertices around you are not editing the route but just changing the shape of the line.

Mine is not a complete solution and I’m interested in feedback and ideas on how to improve it.

First off calculate your directions:

map = new google.maps.Map2(document.getElementById('map_canvas'));
var wayPoints = [];
wayPoints.push(startPoint.getLatLng());
wayPoints.push(endPoint.getLatLng());

var myDir = new google.maps.Directions(map);
myDir.loadFromWaypoints(wayPoints, { travelMode: G_TRAVEL_MODE_DRIVING });

This will calculate your driving directions and plot a GPolyline on your map. The line is easily accessible in the GDirections object once the directions are calculated. To intercept this I decided to use the addoverlay event on the GMap object. This event is triggered every time something is plotted over the map (it does what it says on the tin).

google.maps.Event.addListener(myDir, "addoverlay", function() {
var dirLine = myDir.getPolyline(); // Get the polyline from the directions object
});

At this point we can ask the APIs to make the GPolyline editable. This will make the vertices appear on the line and make them draggable. As I said before, this only changes the shape of the line and doesn't actually affect your directions object. Luckily the GPolyline comes with a nifty event called lineupdated.
This is triggered once the user has finished dragging a vertex. By intercepting it we can look through the vertices, find out what's been changed on the line, and see where the vertex has been moved to.
In order to do this we must also know the previous position of the vertices (latitude and longitude), to be able to compare the old line and the new "edited" one.
Another challenge is the fact that the GDirections object can accept only so many waypoints (25, if I'm not wrong), which means we'll have to add to the directions only the vertex that has changed, not all of them.

// In the addoverlay event, also save the original vertices of the line
var origLine = [];
for (var i = 0; i < dirLine.getVertexCount(); i++) {
  origLine.push(dirLine.getVertex(i));
}
// DONE saving vertices

// Now intercept the lineupdated event and collect the new waypoints
google.maps.Event.addListener(dirLine, "lineupdated", function() {
  var routePoints = [];
  for (var i = 0; i < dirLine.getVertexCount(); i++) {
    var savedPoint = origLine[i];
    // A vertex is new or has moved if either coordinate differs
    if (!savedPoint || savedPoint.lat() != dirLine.getVertex(i).lat() || savedPoint.lng() != dirLine.getVertex(i).lng()) {
      routePoints.push(dirLine.getVertex(i));
    }
  }

  // Now remove the previous directions and recalculate the route
  map.removeOverlay(dirLine);
  calcRoute();
});
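The vertex comparison inside the listener can also be factored into a small standalone helper, which is easier to test away from the Maps APIs. A minimal sketch, assuming only that points expose `lat()` and `lng()` accessors like GLatLng does (`findMovedVertices` and `point` are hypothetical names, not part of the APIs):

```javascript
// Return the vertices in `updated` that are new or have moved
// relative to `original`. Points expose lat() and lng(), like GLatLng.
function findMovedVertices(original, updated) {
  var moved = [];
  for (var i = 0; i < updated.length; i++) {
    var prev = original[i];
    if (!prev || prev.lat() !== updated[i].lat() || prev.lng() !== updated[i].lng()) {
      moved.push(updated[i]);
    }
  }
  return moved;
}

// Tiny stand-in for GLatLng, just for illustration
function point(lat, lng) {
  return { lat: function () { return lat; }, lng: function () { return lng; } };
}
```

Note that a vertex counts as moved if either coordinate differs: dragging a point purely horizontally changes only its longitude.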

This works quite well but does not look as smooth as Google's solution.
The problem is that while you are dragging a vertex only that bit of the GPolyline moves and the rest stays in its original position, which makes the shape of your directions quite awkward while you are dragging. Unfortunately the GPolyline does not come with a "startdragging" event, otherwise we could just recalculate the route every few seconds while the vertex is being dragged.

This is not the most elegant of solutions but it does the job.


SVG graphics with JavaScript

When I started developing TweetSentiment I decided that the interface should have as little text as possible. Most of the information I was interested in could be displayed graphically, with a chart.

So I looked at all the options available for chart generation.

  1. Backend code to generate a static image (JFreeChart or PHP)
  2. Flash object to draw a chart retrieving the data from a URL
  3. Draw charts in JavaScript directly on the client’s browser

I'm not a huge fan of option one. Primarily because TweetSentiment is hosted on a tiny Linux box which would not be able to handle the load for the traffic the site gets, but also because a static image is just not very funky – no interaction possible.

Option two would certainly create spectacular looking charts but I have almost no experience with flash and I wasn’t about to start learning a new language/technology. Plus I’m not into browser plugins if I can avoid them.

JavaScript is a language I'm familiar with and I remember seeing some cool-looking charts generated with Dojo. Unfortunately for TweetSentiment I had used jQuery, since the most important thing for me there was DOM manipulation (and jQuery is just better for that).
I then started shopping around for jQuery plugins to generate charts. There are a few around but none of them impressed me. They just weren't as good looking as I'd hoped, nor were they interactive.

By coincidence I stumbled on RaphaelJS, a library for drawing Scalable Vector Graphics directly from JavaScript. I tested the samples on the website with a few browsers and I was happy to discover it worked just fine with all of them.

Scalable Vector Graphics (SVG) is a family of specifications of an XML-based file format for describing two-dimensional vector graphics, both static and dynamic (i.e. interactive or animated).

The SVG specification is an open standard that has been under development by the World Wide Web Consortium (W3C) since 1999.

I also discovered that there is a charting library built on top of RaphaelJS, which is exactly what I was looking for. However, being a geek, I decided to go ahead and try to develop something on my own. You know, just for kicks.

As I delved deeper into RaphaelJS I found the library to be incredibly powerful. It’s a shame that the documentation provided on the website lets it down a bit.
The most powerful bit is the ability to extend objects and attach new functions to them. Something scarcely mentioned in the available documentation.

For example, if you need to use curved lines (paths, as SVG calls them) you can just define a default function, which you can then call from your code, simply by adding it to the el "object" in RaphaelJS:

Raphael.el.curveTo = function () {
  var args = Array.prototype.splice.call(arguments, 0, arguments.length),
      d = [0, 0, 0, 0, "s", 0, "c"][args.length] || "";
  this.isAbsolute && (d = d.toUpperCase());
  this._last = {x: args[args.length - 2], y: args[args.length - 1]};
  return this.attr({path: this.attrs.path + d + args});
};

Another very useful function I found in one of their samples is andClose(). This is used to close a polygon you have started drawing with paths. No matter where you got to, it will reconnect to the initial point.

Raphael.el.andClose = function () {
  return this.attr({path: this.attrs.path + "z"});
};

This can then be used with chaining:

RaphaelJSElement.lineTo(x, opts.height - bottomgutter).andClose();
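All of these helpers work the same way: they append commands to the element's SVG path string (M moves the pen, L draws a line, z closes the shape back to its start). A tiny illustrative builder, independent of RaphaelJS, showing both the path mini-language and why returning `this` makes chaining possible (`PathBuilder` is a made-up name for the example):

```javascript
// Minimal SVG path-string builder illustrating the commands the
// Raphael helpers above append: M (move), L (line), z (close).
function PathBuilder(x, y) {
  this.d = "M" + x + " " + y; // start the path at (x, y)
}
PathBuilder.prototype.lineTo = function (x, y) {
  this.d += "L" + x + " " + y;
  return this; // returning `this` is what makes chaining work
};
PathBuilder.prototype.andClose = function () {
  this.d += "z"; // close the polygon back to the starting point
  return this.d;
};
```

For example, new PathBuilder(10, 10).lineTo(20, 10).lineTo(20, 20).andClose() produces the path string "M10 10L20 10L20 20z".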

I'm still developing the chart library I used in TweetSentiment and I'm planning to publish it here with some documentation under the MIT licence.


JavaScript games – more thoughts on O3D

This week I wrote a post comparing O3D and WebGL.

Today I have finally spent some time playing with O3D and managed to implement some very simple applications.

Now that I have a clearer understanding of what O3D can and can’t do I have given some thought to the possibility of writing videogames in JavaScript. As I mentioned in my previous post I can’t see myself playing something like Fallout in a browser window. Nonetheless I can imagine simple multiplayer games, something like Monopoly or Risk, working this way.

I have developed quite a few JS applications that allowed users connected at the same time to interact with each other. It’s very simple, constant AJAX posts and gets with a server keeping the state of the interaction. Imagine something like GTalk integrated inside GMail.

This is all well and good when the interaction is limited to a few chat messages or the coordinates of the mouse pointer on the screen, but multiplayer videogames have to shift a massive amount of data every second. When you play Gran Turismo online the position, speed and state of each player's car must be synced across all the participants as often as possible. Add chat/voice data to that and you'll soon realise that 30 players for one game calling your server at the same time to get and post data is just not manageable. Furthermore, to ensure the timely delivery of the data to each client you are much better off pushing the data to the clients rather than relying on them to call your server.
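One common way games keep that per-tick traffic manageable is delta compression: each update carries only the fields that changed since the last state the peer acknowledged. A minimal sketch in plain JavaScript (the state fields here are made up for the example, not from any real game protocol):

```javascript
// Given the last state the peer acknowledged and the current state,
// build a minimal update containing only the fields that changed.
function buildDelta(lastAcked, current) {
  var delta = {};
  for (var key in current) {
    if (current[key] !== lastAcked[key]) {
      delta[key] = current[key];
    }
  }
  return delta;
}

// Apply a received delta on top of a previously known state.
function applyDelta(state, delta) {
  for (var key in delta) {
    state[key] = delta[key];
  }
  return state;
}
```

If only the car's x position changed since the last tick, the update is a single field instead of the whole state object, which is what makes frequent sync rates affordable.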

What O3D should add to its APIs is a DirectPlay alternative: multiplayer support built straight into O3D. This way your JavaScript game would be able to establish peer-to-peer communication between all the clients without having to stress your servers – simple socket communication giving developers the ability to push data between all the connected peers.
Network support, by being built inside the O3D plugin, could also deal with all the annoying connectivity issues such as "punching" through NATs.

Without properly implemented network play I don’t think we’ll ever see 3D games flourish in your browser window.

Tagged , , , , , ,

WebGL and O3D

You may have read recently that Khronos is implementing something called WebGL. The objective of the project is to expose all of OpenGL ES's calls to JavaScript, thus allowing hardware-accelerated 3D graphics within a browser.
Google has also been working on an alternative, called O3D.

Let's first talk about the technical differences between the two projects.

O3D and WebGL, while both trying to bring accelerated 3D graphics to the web, have taken two fairly different courses. As I mentioned in the introduction to this post, WebGL's plan is just to expose the OpenGL ES 2.0 APIs to JavaScript, whereas Google's solution is based on a browser plugin.

If we think about this we’ll soon realise that WebGL depends entirely on JavaScript. JavaScript, as of today, is a fairly slow language. This point was made in a discussion thread on the O3D project website.

WebGL, being 100% dependent on JavaScript to do an application’s scene graph, is going to have serious problems drawing more than a few pieces of geometry at 60hz except in very special cases or on very fast machines. This means WebGL requires JavaScript to:

*) do all parent-child matrix calculations for a transform graph.

*) all culling calculations (bounding box to frustum or other)

*) all sorting calculations for dealing with transparent objects.

*) all animation calculations.

As an example the kitty demo in O3D is doing linear interpolations on 2710 floats to animate 170 transforms. The point is not that the artist that created the kitty should probably not have used 170 bones. ;-) Rather the point is it seems unlikely that JavaScript will be able to do that anytime soon, and if it can then just add more than one kitty to pass its limits.
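To make that workload concrete: every transform in a scene graph needs at least one 4x4 matrix multiply per frame (its world matrix is the parent's world matrix times its local matrix), and each multiply is 64 multiplications and 48 additions, before any culling, sorting or animation work. A plain-JavaScript sketch of that single operation (`mat4Multiply` is an illustrative name, not from either API):

```javascript
// Multiply two 4x4 matrices stored as flat, row-major, 16-element arrays.
// A scene graph performs this once per transform, per frame, at minimum.
function mat4Multiply(a, b) {
  var out = new Array(16);
  for (var row = 0; row < 4; row++) {
    for (var col = 0; col < 4; col++) {
      var sum = 0;
      for (var k = 0; k < 4; k++) {
        sum += a[row * 4 + k] * b[k * 4 + col];
      }
      out[row * 4 + col] = sum;
    }
  }
  return out;
}
```

Run this for hundreds of transforms, 60 times a second, inside an interpreted language, and the concern in the quote above becomes easy to believe; O3D's answer is to do this loop in C++ inside the plugin instead.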

Also we have to keep in mind that not all hardware supports OpenGL ES.

O3D, by virtue of being a browser plugin written in C++ (an additional, hopefully fast, abstraction layer on top of the GPU), allowed Google to define a new set of APIs to expose to JavaScript and keep us (the JavaScript developers) away from the hardware. O3D takes care of the interaction with either DirectX or OpenGL.

Furthermore, Google has open-sourced O3D through its Google Code website, which means we can all have a look at their code and participate in the project. This has resulted in a lot of documentation being available. For a full overview of how O3D works check out the technical overview on the O3D project page.

Do you think that this is the making of a new “Standards war”? Both Google and Khronos are adamant that they are not competing. However I believe that ultimately only one project will come out as a standard. As the complexity of 3D web applications increases it is not feasible to write code for both “APIs”. The only question for me at this point is who will come out on top.

To answer this I would look at the audience of the two projects. OpenGL has been out in the wild for a long while and many developers of videogames, or graphics applications in general, are already familiar with the APIs and the way they work, therefore it would probably make sense for them to embrace WebGL.
Nonetheless O3D still stands a chance. For a very simple reason. It’s the web we’re talking about.

Frankly I can't see myself playing a big videogame like Fallout in a browser window anytime soon. These APIs will be used to enrich web applications. Some examples are already coming out using O3D. Have a look at this Home Designer. Can't you already see IKEA using it?
My point here is that we're not likely to see game developers switch to the web. We're much more likely to see web developers start working on games or applications involving 3D graphics, and this is where Google wins.

O3D extends application JavaScript code with an API for 3D graphics. It uses standard JavaScript event processing and callback methods.

As a web developer I can keep writing JavaScript code as I’m used to without having to change the way I think to how a game developer does.

What do you think?


Google tackling online storage

After much anticipation and hype the Gdrive seems to be on its way, or so the WSJ reports.

A Google spokeswoman declined to comment on any specific online storage plans beyond what it already offers as part of its email and other services. But she said in a statement that “storage is an important component of making Web [applications] fit easily into consumers’ and business users’ lives.”

Most companies, from small businesses to big corporations, are moving their environments online to make documentation, presentations or whatever else may be needed available to their employees, wherever they may be, whenever they want.

As I said in a previous post, Google is pushing its online productivity suite, and shared online storage could definitely give an additional boost to the entire system.
Online storage is one of the few reasons why I use .Mac; the second rationale behind the choice is that the interface is just brilliant: the iDisk is mounted as a file system and directly accessible from my Finder.

In my opinion if Google really wants to make the Gdisk a must have for small/big businesses a client software to access the data is vital – not because it works better, but because it is a step final users have to go through to get used to online storage solutions. Most people don’t, and won’t for a while, use Writely or Google’s new PowerPoint-ish software – they’ll keep creating documents in their local environment and the sensation of accessing a local drive to save their work will make them feel somewhat more secure.

For its office components to attract big businesses Google still has to do a great deal of work on the corporate account handling side – being able to organize accounts in groups and set different access permissions on a Gdisk's folders would be a great start.
Another useful additional feature, which as I understand it is due sometime soon, is offline availability of the applications. An internet connection is not always available, and an entire company can't just stop working because the IT people in the basement are messing around with routers.

Having said that, it's not only functionality-related issues Google has to address but also privacy and security questions. If they want more of our data to be stored on their servers – and with Gdisk it wouldn't only be images and documents but all sorts of data we may not want other people to see – we expect Google to have some pretty satisfactory answers ready, especially when we're talking about the reserved and potentially vital information its business customers save in the cloud.


BBC streams video the Microsoft way

I was invited a few months ago to test the BBC iPlayer. I quite liked the idea and I’m very interested in everything streaming. I also tested Joost and Zattoo.

Unfortunately, when the BBC decided to send me the invitation I had a Mac laptop and my computer at home was running Vista, !@"#$. Obviously the player doesn't work with Mac OS. I wasn't, however, expecting the application not to work with Vista. When I tried to activate the application I was greeted by an error popup telling me that the system works only with Internet Explorer on Windows XP. Fair enough.

I have now switched back to good old XP and am ready to test this player. Still no Firefox support, though: I opened IE and logged in to the BBC site with my beta account. Beautiful web interface, which works fine with Firefox too. After installing the small application an icon appears in the systray, called "Your Library". Say what? I thought all BBC content was part of my library and I just had to click on any video to have it streamed to me instantly.

Apparently not. Opening the "Library" just says: you have nothing. OK, so back to the BBC website with my newly installed IE plugin. I clicked on the new episode of Little Britain and right away the site told me that the video was being downloaded. Again, my only reaction was "Say what?". So back to the library window, which is not even an application but a small popup running the IE engine with the BBC plugin to access your system information. This bright pink window told me that a 300 MB video was being downloaded and that I had 500 MB of space on my hard drive allocated for the library.

My video is now here, excellent: click on play now. Another small IE-powered popup opens and, disguised under a customized interface, Windows Media Player starts playing the video. A bit slow, though, but you can't expect much from WMP. Oh yeah, almost forgot: my license for the video expires in 3 days, after which it's a useless 300 MB file.

Now I don’t pretend to know any better than the BBC people working on this. They must have thought it through quite thoroughly. I do have a few questions though.

  1. Only Windows XP and IE? Ever heard of Flash? YouTube? Anybody? How about Mac users? Linux? Vista?
  2. Download a 300 MB file that I have to trash after 3 days? Planning to save on your bandwidth anytime soon?
  3. Ever heard of streaming using P2P technology to save the aforementioned bandwidth and offer more content? Seriously, you should check out Joost; they started developing their application before you did.

Please, please have a look at Joost. Multi-platform, more content and less bandwidth usage, because viewers stream to each other using P2P technology. Maybe Janus Friis and Niklas Zennström will take pity on you and give you a small channel on their platform to share your content, if you ask nicely.


US goes on holiday, the internet goes bye bye

Does the blogging world revolve around the US? This blog's statistics seem to confirm it.

I had a look at the statistics for both my blog and other websites I work on (charts below) and it would seem that most internet traffic for blogs is generated by the US. Everything stopped for Thanksgiving. There's no denying that even in Europe we've seen the internet boom, and the whirlwind of activity and startups ensuing from it, but the traffic, the user base, still seems to be mostly US based.

Are we Europeans still a bit behind in terms of internet mentality? With China still pretty much behind a great wall internet-wise, and broadband connections not quite as readily available in most areas of Asia as they are in the West, is the only big "market" the US?

The Big Deal - statistics



Kindle ignited

Amazon has finally launched its e-book reader, Kindle, available now for $399 with access to 88,000 books, including 100 of the 112 New York Times best sellers.

The entire idea sounds pretty exciting, especially considering that the big A will pick up all the “phone bills” for its Whispernet service, which is based on Sprint’s EVDO.

Whispernet allows Kindle owners to wirelessly shop the Kindle Store, download and receive content, and works out of the box – no setup required. Newspaper subscriptions cost $5.99 to $14.99 per month and Kindle magazines cost between $1.25 and $3.49 per month – each is available for a free two-week trial. Oddly, blogs will cost you $0.99 per month to subscribe. Just running down the specs again: internal storage for 200 titles (more via SD expansion), 10.3 ounces.

Today Seth Godin was advocating Amazon's debut in the publishing world, or rather, virtual publishing.

I love the idea and fully recognize its potential – it's definitely the future, though I'm afraid not mine. I love reading books: I read every night and love my collection, which I always display proudly on my shelf. Not quite so proudly right now, because I have exceeded the amount of available space and have started piling books all over the place, but I'm working on that.

Update: GigaOm has an interesting article I completely agree with: make the ebook reader inexpensive and earn money through the books. Whoever is likely to buy an ebook reader is certainly not looking for a technology-packed gizmo.

Update: Kindle hardware features

  • It doesn’t use a generic RSS aggregator — it’s Amazon-selected blogs only (and they “want every blog they can get”). Blogs that are aggregated by the Kindle get a revenue share with Amazon, since it costs money to get those publications.
  • The side scroller is, as we expected, a polarized PNLCD (pneumatic LCD). It looks amazing.
  • It’s SD only, not SDHC.
  • It uses the Kindle file format (which is a variant of structured HTML), but also accepts Word and PDF files (but only via email since they need to be converted by Amazon), Mobi, HTML, plaintext, and image files like JPEG, GIF, and PNG. Sorry, no RTF.
  • Oh yes, it supports Audible! Oh, and a little, unused file format called MP3.
  • It has a user-replaceable, 1530mAh battery
  • You can bind five or six devices to a single account, and share books you’ve purchased to those accounts. There’s no simultaneous reading lock, so if you and your significant other are on the same Amazon account you can both read the same book at the same time on your Kindles.
  • Amazon is also releasing the Digital Text Platform, which allows users to upload their own content to the Kindle store for sale and download.
  • The $9.99 price point is the sweet spot, but there are books for sale from the Gutenberg project for under $1 (if you don’t want to download them for free yourself), and upwards of that quoted $10 price point as well.
  • Amazon wouldn’t say who makes the device, just that “it’s an OEM in China.”

PayPal to launch “Secure Card”

The internet payments king is set to officially launch a new Secure Card service tomorrow. Thanks to a deal with MasterCard, PayPal will be able to generate a "temporary" credit card number for each of your payments, so that you won't have to give away any private information.

The software apparently works only on Windows and puts a small credit-card icon in your task bar. Whenever you feel like generating a new card, all you have to do is double-click on the icon and a new window will pop up with all the details you need (i.e. number, expiration date, CVV). The Windows software also goes to all the trouble of automatically filling in any ecommerce form you may be presented with – this, obviously, only if you use IE.

“From a merchant’s perspective this looks like any other MasterCard transaction,” said Chris George, director of financial products for PayPal. “And it’s just another PayPal purchase to the customer.”

This is PayPal's answer to the Google Checkout system, which tries to make shopping online easy for you by directly storing your personal details and automatically authorizing payments with partner merchants. My vote goes to PayPal, and apparently most people agree with me if we look at the data.

In the third quarter ended September, transactions through Web merchants grew 61 percent to $5.38 billion from a year ago, while overall PayPal transaction volume grew 34 percent to $12.22 billion over the same period.

The idea is absolutely great and they have apparently been testing it in private-beta mode for about a year.

The quirk? Oh yeah, there is one. As I said before, the software is for Windows, with an IE plugin. Redmond says thank you.
I say WTF. I am a Mac/Linux user – can't you see your way clear to giving me full access to the service? You are a web company, born and grown on the web; why do you start developing client applications in 2007?


FaveBot – RSS aggregation unleashed

I recently came across FaveBot. The site aims to aggregate and filter RSS feeds from the most renowned news sites and blogs all over the internet, and has recently received a major update functionality-wise.

The site doesn't exactly look beautiful, but its minimalistic, simple interface makes it incredibly easy to use.
Once the really brief signup process is completed you are all set to go. Through the "My Trackings" tab you'll be able to specify search keywords and the categories of feeds you want the site to search in. Results are immediately accessible from the "My Discoveries" section.
The categories currently available on the site are blogs, books, DVDs, events, music, news, photos, podcasts and videos.

Another spiffy functionality is the possibility to upload your iTunes library file and let the site aggregate for you all relevant news about your favorite bands.

The only thing I find quite confusing is that while you're allowed to create multiple tracking filters, the results are then all mashed together. The only way to access the outcome of a single search is to click on the number of results in the Trackings summary page.
The site would, perhaps, grow at a considerably faster pace by letting its users add new feeds to its database. At the moment the list of feeds to be spidered seems to be "hard-coded".

The idea looks solid and fairly useful. With just a few fixes and usability improvements it could probably turn into a profitable business in its niche between Feedburner and Digg.
