geo-targeted content: internet actual reality or actual world virtual reality?

I was reading Inside AdWords: Use Google Maps to target your customers about Google letting advertisers geo-target AdWords campaigns using address and distance radius.

This is a great feature by itself, but it is something more: one more step toward consolidating the Internet's "virtual" reality with the real world's "actual" reality. It means that you can put Internet ads "on this street". On the other hand, you can use tools (Google Analytics is one of them) that show you "how many people in this area read your blog".

Imagine this. You turn on your laptop, you get in your car and as you drive across the country you connect to different wifi hotspots. Each time, you reload the same web page that contains ads from Google. Then you realize that depending on where you are, the ads change. Much like the view from your window. In a way, through your browser you see something that “belongs” to the location you are in, just like the mountain on the horizon, or this funny building across the street. The Internet becomes part of the scenery.

Think of it for a minute. The moment you geo-tag a piece of information (be it a photo, a blog post, a video or audio recording, or any other kind of media) you link it to an actual location. Right now, we are usually able to find the location associated with a geo-tagged piece of information. Soon, given the right tools like mobile Internet and smart location-based services and apps, we will be able to access this information (literally) "on the spot": see or hear or read it just as you would see the sunset, hear the noise from a nearby factory or read the sign across the street.
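To make the idea concrete, here is a toy sketch (in Python, with made-up photo data; the item fields and function names are my own invention, not any real service's API) of what "accessing geo-tagged media on the spot" boils down to: a distance filter over tagged items, centered on wherever you happen to be standing.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on Earth, in kilometres.
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def media_near(items, lat, lon, radius_km):
    # Return geo-tagged items within radius_km of the given spot.
    return [it for it in items
            if haversine_km(it["lat"], it["lon"], lat, lon) <= radius_km]

photos = [
    {"title": "Colosseum at noon", "lat": 41.8902, "lon": 12.4922},
    {"title": "Eiffel Tower",      "lat": 48.8584, "lon": 2.2945},
]
# Standing in central Rome, you would "see" only the nearby item:
nearby = media_near(photos, 41.89, 12.49, 5.0)
```

A real location-based service would of course use a spatial index rather than a linear scan, but the principle is the same: the geo-tag turns media into something you can query by "where".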

Web 2.0 products I need :-)

Richard MacManus has an interesting article about Web 2.0 Products We Need (But Which Don’t Exist Yet). Regulars of this blog will already know my list, but here it is in titles (all 3 of them!):

1. A service that will let us mix feeds according to user-defined rules.
2. Wider use and services that take advantage of geo-tags.
3. A microformats-aware search engine.

Where’s Waldo?

A couple of days ago I was in Rome with my girlfriend, Elina. Typical tourists that we were, we kept shooting photos with our digital camera. Storage is no longer an issue: I had my 20GB iPod photo with the camera connector. Most of the photos were shot in crowded places, some of them nearly packed with other tourists shooting their own photos and video.

“Have you ever considered how many photos around the world you are in?” Elina said.

I guess a lot, but there is no way to find them, is there?

Then I realized that, had I bought the portable GPS receiver I wanted (in order to geotag [1] my photos), others and I could search for on-line photos by the GPS coordinates and date of my journey. Something like “search for photos taken at the Colosseum on 2005-11-21 between 13:00 and 13:30”. If people geo-tagged their photos, I’m sure I could play my personal version of “Where’s Waldo” with some success…

I do not think this can be done now; I do not know of a service that would allow me to query using GPS coordinates and date (or is there one?). But wouldn’t it be really cool if Flickr, Google, or Yahoo! supported such a feature?
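Just to show how simple the query itself would be, here is a toy sketch in Python (all the photo records and field names are hypothetical). It uses a crude lat/lon bounding box instead of proper great-circle distance, which is good enough for "at the Colosseum":

```python
from datetime import datetime

def photos_at(photos, lat, lon, box_deg, start, end):
    # Crude "Where's Waldo" query: photos inside a small lat/lon
    # bounding box AND inside a time window.  A real service would
    # use true distance and a spatial index.
    return [p for p in photos
            if abs(p["lat"] - lat) <= box_deg
            and abs(p["lon"] - lon) <= box_deg
            and start <= p["taken"] <= end]

photos = [
    {"owner": "stranger42", "lat": 41.8902, "lon": 12.4922,   # Colosseum
     "taken": datetime(2005, 11, 21, 13, 10)},
    {"owner": "stranger99", "lat": 41.9022, "lon": 12.4539,   # Vatican: wrong spot
     "taken": datetime(2005, 11, 21, 13, 15)},
]

# "photos taken at the Colosseum on 2005-11-21 between 13:00 and 13:30"
hits = photos_at(photos, 41.8902, 12.4922, 0.002,
                 datetime(2005, 11, 21, 13, 0),
                 datetime(2005, 11, 21, 13, 30))
```

Everything needed already exists in the geo-tags; what is missing is a service that exposes this kind of query.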

BTW, you can find a photo of me here. If you were in Rome during the last weekend and you think I’m in one of your photos, it would be fun to send it to me!


[1] Read How-to: easily geotag your Flickr photos, HOW TO GPS Tag Photos: Flickr, Mappr, Google Earth….

Web services business models

I liked Webjillion’s Will Your New, Favorite Web Application Last?, straight to the point.

Web services are all about building value on top of services you do not (usually) own or control. Does this make sense? Is there a (development/business) model that could make such services more viable?

It looks like more and more people are ready to pay for services provided through the Internet -something that a couple of years ago sounded like blasphemy.

It’s all about the feeds

Burning Questions, the official FeedBurner blog, features an interesting analysis today. What they say is that feeds are not just about blogs anymore -and they are right.

Traditionally, feeds (RSS, Atom and RDF) have been used to distribute a website’s content. Subscribe to a site’s feed and you get the “headlines”: a stream of items usually composed of a title, publication date, abstract and main body.

What most web developers know is that this simple “schema” fits well enough not just “news” but other kinds of data too. Take for example Flickr “photo streams”, where instead of a “body” you get the URL of a photo; del.icio.us bookmarks, where each item is a user’s bookmark; or podcasts, where each item “encloses” an audio or video “attachment”. One of the nice things about RSS, RDF and Atom is that they are flexible enough to support uses like these and (by design) are not limited to “news” items. Nowadays, almost everything is published as a feed: add wish lists, alerts, personalized search results and much more to those mentioned above.

Users tend to think that a feed derives from a web page’s content, when usually both are representations of the same information, which resides in some sort of database. The main difference is that web pages are usually focused on presentation, while feeds are focused on structure. As mentioned above, a feed has an inherent structure that makes it ideal for other programs or services to “consume”: parse, understand and extract “what matters” in each case.
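That inherent structure is exactly what makes feeds machine-friendly. A minimal sketch in Python, using only the standard library and an inline toy RSS 2.0 document, shows how little work a consuming program has to do:

```python
import xml.etree.ElementTree as ET

# A toy RSS 2.0 feed, inlined for the example.
RSS = """<rss version="2.0"><channel>
  <title>Example blog</title>
  <item><title>Hello feeds</title>
        <pubDate>Mon, 21 Nov 2005 13:00:00 GMT</pubDate>
        <description>Feeds are not just for news.</description></item>
</channel></rss>"""

def items(feed_xml):
    # Pull the structured fields out of each <item>; this is the
    # "schema" a consuming program actually cares about.
    root = ET.fromstring(feed_xml)
    return [{"title": i.findtext("title"),
             "date": i.findtext("pubDate"),
             "body": i.findtext("description")}
            for i in root.iter("item")]

parsed = items(RSS)
```

No scraping, no guessing at presentation markup: the title, date and body are right where the format says they will be.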

FeedBurner already offers some of these services. You can easily “mix” your blog news with your Flickr photo stream and your del.icio.us bookmarks. You can even mix them in different ways, say gather all your weekly del.icio.us bookmarks into a single post in the newly generated feed. Or you could create your own “video channel” of videos indexed by Yahoo! and tagged with a certain keyword, like “funny” or “football”. Or automatically post your del.icio.us bookmarks as a single post in your blog on a daily basis using yadd. Or combine a GeoRSS blog feed with geo-tagged Flickr images. Or upcoming.org events with rsswether.com.
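The core of all these mash-ups is surprisingly small. Here is a toy Python sketch (the item dictionaries stand in for already-parsed feed entries; nothing here is a real FeedBurner API) of the basic "mix" operation: merge several feeds into one, newest items first.

```python
from datetime import datetime

# Two pretend feeds, already parsed into item dictionaries.
blog = [
    {"title": "New post", "date": datetime(2005, 11, 22), "source": "blog"},
]
links = [
    {"title": "cool bookmark", "date": datetime(2005, 11, 21), "source": "del.icio.us"},
    {"title": "another link",  "date": datetime(2005, 11, 23), "source": "del.icio.us"},
]

def mix(*feeds):
    # Merge any number of item lists into one feed, newest first --
    # the heart of a "rip-mix-burn" feed service.
    merged = [item for feed in feeds for item in feed]
    return sorted(merged, key=lambda i: i["date"], reverse=True)

mixed = mix(blog, links)
```

Everything else (weekly digests, keyword filters, GeoRSS joins) is a variation on the same pattern: filter, group or transform the items before re-serializing them as a new feed.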

I’m quite sure (and Burning Questions’ article points in this direction too) that we will see more of this “rip-mix-burn feeds” trend in the near future. We should expect new tools that allow us to extract, parse and combine information from feeds into new feeds, as well as present those new feeds in new ways, generating unexpected results, services and added value.


UPDATE. Check out this article too: The Second Coming of Content and RSS Feeds

Google Base vs. microformats

It was one of my first thoughts when I saw Google Base: why not endorse microformats? BuzzMachine has a more detailed article on the same subject Google Base v. microformats, and some interesting comments by readers.

Long weekend in Rome

A long weekend in Rome. What a nice break…

Rome

St. Peter, Vatican

We stayed at WRH Suites. Really nice and stylish rooms, a five-minute walk from Termini Station (the main train station, a metro station, and where most buses leave from) AND free Internet in the room (they even had a laptop in the room you could use)! Good value for money; the guy at the reception was always happy to give information and help us plan the day. If you are looking for a nice place to stay in Rome, check them out.

We just arrived in Athens, but here are some of the last photos I shot at Rome (Fiumicino) Airport:

Fiumicino Airport, System ERROR

Excuse me for being bitter, but the way I see it nothing should crash in an airport…

GoogleBase: all your Base are belong to us

Google Base made its debut today. It’s a free database engine with a web interface. It’s powerful and simple to use, like most Google services. Users can define an item’s “properties” (if it is a book: ISBN, title, author, publication date and so on; if it is a car: brand, model, color, etc.), tag each item with up to 10 tags and attach a photo. One can expect innovative uses for this new service. People are already talking about Google entering the “listings” industry, dominated until now by newspapers, Craigslist and eBay.

What I do not like is that all this information (consider the scale) will not just be indexed by Google, but also hosted by them. It’s free, but it’s like being given land to build your home or business on, land you do not really own. The actual owner is Google: all your Base are belong to us? It’s like Google is trying to be the Internet.

I would have loved to see an API that allowed developers to manipulate and extract data from Google Base. Then I could be sure that whenever I wanted, I would be able to extract all my data, delete it from Google Base and set up my own site. But it’s not there -hopefully not yet.

Or even better, I would prefer to see Google indexing and embracing microformats, a clever use of XHTML to describe data. This would allow a whole new industry to develop around Google: new tools and services in an open environment.
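The appeal of microformats is that the data lives in the open, inside ordinary XHTML, instead of inside someone's database. A minimal sketch in Python (the hCard snippet and the extractor are my own toy example, covering only the `fn` and `org` fields of the hCard microformat):

```python
import xml.etree.ElementTree as ET

# A tiny hCard: ordinary XHTML whose class names carry the data schema.
XHTML = """<div class="vcard">
  <span class="fn">Ada Lovelace</span>
  <span class="org">Analytical Engines Ltd</span>
</div>"""

def hcard(xhtml):
    # Walk the markup and collect the microformat fields we understand.
    root = ET.fromstring(xhtml)
    out = {}
    for el in root.iter():
        cls = el.get("class", "")
        if cls in ("fn", "org"):
            out[cls] = el.text
    return out

card = hcard(XHTML)
```

Any tool, not just the site that published the page, can extract the same structured data -which is exactly the open environment I would like to see Google endorse.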

I’m really worried.

Google Analytics wishlist

Dear Google, my Google Analytics wishlist is quite short:
- Please integrate my AdSense account with it! I would like to be able to see how much I make from each page and each visitor segment.
- Give me an alternative tracking mechanism, like a small .GIF I could use to track my RSS feeds.

Not much, is it?

Google Analytics: is it a GoodThing ™?

Yesterday, Google rolled out their new service, Google Analytics. Google Analytics is actually “Google hosted Urchin web stats”. It’s nice, powerful and free. But is it a GoodThing ™?

First of all, Google Analytics is not a replacement for log-based traffic analysis. It will track (at least for now) only HTML content requests from modern browsers, leaving out things like RSS feeds, audio or video downloads, etc. What’s more, it is unable to measure traffic in MB or break it down by file type (HTML, images, etc.), metrics webmasters rely upon to optimize site performance and minimize hosting costs.

But Google Analytics is a really powerful marketing tool. It brings data mining to the masses. It will introduce the average Joe to “conversion” and “funnels” and will give them the power to take advantage of metrics available until now only to a small percentage of web publishers. I think this is a GoodThing ™.

What’s more, I hope that the wide use of such a tool will set a de facto standard for web metrics. We will finally be able to talk about “Unique Visitors”, “Absolute Unique Visitors”, “Visits” and “Pageviews” without the need for an asterisk next to each term and an appendix defining the terms; “as measured by Google Analytics” will be implied, even when other tools are used. This is a giant leap for small and medium-sized on-line advertising. It is a great benefit for small sites to be able to provide the same metrics as the big guys, and it will also make it much easier for agencies to advertise on small sites. This is a GoodThing ™ too.

At what cost? Well, you will have to share this knowledge with Google. They will know as much as you do about the advertising power of your site, maybe even more. They will know your visitors’, users’ and clients’ habits. They will know a large part of the Internet much better than anyone else, a power that may seem frightening. It does to me, I have to admit.

However, Google does not provide this service in a way that locks you in. You could switch to a different service at any time, or just stop using Google Analytics -unless you get hooked on their stats for good.

GoodThing ™ or not? I do not know. But I think it is safe to use it and take advantage of it for now.