Friday, March 27, 2009

MySpace does reality TV

Thanks Mashable for this latest move in MySpace land

March 25th, 2009 | by Adam Ostrow

Now this makes sense. MySpace is getting into reality TV with a new Web-based series called “Married on MySpace” that will chronicle one couple’s journey to their wedding day, with MySpace members driving much of the decision-making in planning the event.

The process starts with couples submitting videos to MySpace and users voting on who should be featured. Once that has been determined, MySpace users vote on other aspects of the event, like selecting what the bride and groom wear, where they celebrate their bachelor and bachelorette parties, and the wedding location.

In all, there will be 13 webisodes of Married on MySpace, culminating in the selected couple’s wedding day. The series is being produced by Endemol USA, the company behind reality TV shows like Big Brother, Fear Factor, and Deal or No Deal, so the quality (or lack thereof, depending on your point of view) should be up to par.

While MySpace has conceded the race to be the top social networking site, it still has a place as one of the most popular entertainment destinations on the Web. Original, relatively low-cost programming like Married on MySpace that involves the community is a smart move, and something we’ll likely continue to see.

The trailer for Married on MySpace is embedded below:

Married on MySpace Trailer


Wikirank: Find What’s Trending on Wikipedia




Thanks Mashable for this post on a great tool for monitoring trends on Wikipedia.

March 25th, 2009 | by Jennifer Van Grove

Wikirank does for Wikipedia what sites like Compete do for websites. It’s a nifty analytics tool that tracks trending topics on the world’s largest online encyclopedia, displays the 10 most read articles in the last 30 days, and gives users the ability to compare stats for up to four different topics.

Wikirank uses the actual usage data from Wikipedia servers to give visitors a better global or custom view of what’s happening across the information hub. Cooler features include the ability to graphically compare impressions on four different articles, embed graphs, view Wikipedia entries, and quickly search for related content on Google News, Twitter, or The New York Times.

[Screenshot: Wikirank comparison graph featuring Twitter]

We really like Wikirank’s trending topics on the home page. Topics are ranked by percent change, which gives a great graphical view of major fluctuations in page views. Plus, the most-read topics of the past 30 days give us an awesome glimpse at what’s hot over a longer duration.

[Screenshot: Wikirank home page]
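For a rough sense of how such a percent-change ranking works, here’s a minimal JavaScript sketch. The topic names and view counts are made up for illustration; Wikirank’s actual implementation isn’t public.

    // Hypothetical sample data: page views for two consecutive periods.
    var topics = [
      { title: "Twitter",               prev: 80000, curr: 140000 },
      { title: "March 2009",            prev: 12000, curr: 95000  },
      { title: "Large Hadron Collider", prev: 30000, curr: 27000  }
    ];

    // Percent change between the two periods; guard against division by zero.
    function percentChange(t) {
      return t.prev === 0 ? Infinity : ((t.curr - t.prev) / t.prev) * 100;
    }

    // Rank the biggest movers first and print them.
    topics.sort(function (a, b) { return percentChange(b) - percentChange(a); });
    topics.forEach(function (t) {
      console.log(t.title + ": " + percentChange(t).toFixed(1) + "%");
    });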

We love the tool and can’t wait to use it to start comparing pop culture and Web trends, especially since Wikipedia has 10 million plus articles and is most likely one of the first places mainstream audiences go for information on the Web. What do you think of Wikirank? Tell us in the comments.


Social Collider - the most seriously cool and heavyweight Twitter visualisation yet

Thanks guys for the post and the work. Stunning and, just possibly, very very useful

Social Collider is a new collaboration with Sascha Pohflepp: a JavaScript visualization that reveals the cross-connections between conversations on Twitter. The project launched just two days ago; it was commissioned by Google for their Chrome Experiments collection and produced by the friendly peeps at Instrument. Social Collider acts as a metaphorical instrument which can be used to visualize how memes are created and how they propagate. Ideally, it might catch the Zeitgeist at work.

Social Collider in action

Concept

In December Sascha and I were both independently contacted about contributing to the Google Chrome Experiments project. We decided to put our heads together on this one, since we would both much rather build something which would qualify as a browser experiment as per the brief, but could also become something bigger & more worthwhile over time. In several meetings over coffee we slowly narrowed down Sascha's general visualization idea to showing one's own data traces in context or contrast to the things going on around us, which possibly influence our moods and actions without us consciously realizing it. Initially we wanted this context to be relatively removed, both thematically and in terms of scale (personal vs. societal), and we were thinking about plugging into energy consumption, other environmental datasets, the weather or news headlines. The problem with the first two is still obtaining data in a sufficiently granular format to be actually meaningful (i.e. not summed values per year or aggregated per country). Massive uptake of grassroots data portals like Usman Haque's Pachube will hopefully change that in the not too distant future. In hindsight, not focusing on the weather also proved to be a good thing, since Use All Five already covered that topic with their fantastic smalltalk experiment, which is also part of the Chrome collection.

Since we both have been avid Twitter users for quite a while now, we knew that people use it to discuss practically anything, incl. the topics mentioned above. The realtime granularity combined with the ad-hoc discussion element is something I've been increasingly treasuring about Twitter, because it adds personal contexts, opinions and feedback to the “data”, be it the weather, music, politics, geekery etc. Of course other platforms like Facebook have that too, but for our purposes Twitter was the better option since it allows for socially far more widespread conversations (it doesn't have any concept of groups, tribes or networks; anyone can talk & reply to anyone they wish without jumping through any hoops). It also has the benefit of better thought-out APIs.

Visualization

Technically, as well as for time reasons, we decided to create a pure clientside JavaScript visualization. This decision provided a great creative challenge for us, but also limited our choice of easy-to-access compatible webservices even more. To satisfy the instant-gratification part of being a browser experiment, we had to exclude any data coming from APIs requiring 2-step authentication, and we also made a conscious decision to avoid the dreaded Password Anti-pattern.

The last missing ingredient was a strong metaphor. As any hobby psychologist knows, good metaphors are a key enabler for (successful) visualizations. On the other hand, the majority of network visualizations today are based on the “rocks & sticks” metaphor (thanks Mike & Tom! :), basically treating nodes as particles and connecting them with lines. This visual language has been culturally lifted straight off mathematical graph theory textbooks, and of course it's hard (if not impossible) to totally break free of that established mental image, especially when the data we're dealing with is literally particular (microcontent) and loosely connected. Yet to add a twist to this classic, we decided to approach the visualization more like the creation of a painting: we would use slow reveals to give the user more time to trace all identified connections, place it in a conceptual environment, and use a visual language which directly references particles. With the Large Hadron Collider launch from only a few months earlier still glimmering on our mental horizons, this became the perfect (if obvious) metaphor…

Mapping

Within this space, particles are mapped two-dimensionally based on their position in time (vertical axis) and search query ID (horizontal axis). Search results for each query are automatically connected vertically via smooth, curvy B-splines (using a JS port of this) in the same color. If results from different queries are somehow related (see below), a spiral is first drawn around the older particle, which will eventually be connected to the other related particles horizontally. The size of the spiral corresponds to the number of cross-connections the related message/tweet has accumulated. Hovering with the mouse pointer over a particle displays its related message. Clicking on a node opens the selected tweet on twitter.com…
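As a rough sketch of that mapping (hypothetical field and parameter names; the Social Collider source hadn't been released at the time of writing), each tweet becomes a particle whose vertical position is its normalized timestamp and whose horizontal position is the column of the query that found it:

    // Map a tweet to 2D particle coordinates: time on the vertical axis,
    // search query ID on the horizontal axis. All names here are hypothetical.
    function mapToParticle(tweet, queryIndex, bounds) {
      // Normalize the timestamp into the 0..1 range covered by all results.
      var t = (tweet.time - bounds.minTime) / (bounds.maxTime - bounds.minTime);
      return {
        x: (queryIndex + 0.5) * (bounds.width / bounds.numQueries), // column center
        y: t * bounds.height,
        tweet: tweet // kept around for the hover/click interactions
      };
    }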

Data mining

Since Twitter messages have a hard limit of 140 characters, the community has come up with various kinds of syntactic sugar to add meaning & metadata. At present, there are 5 major potential connection axes in Twitter messages: @usernames, direct @-replies, #hashtags, retweets (RT), and posted URLs. Using exclusively Twitter's Search API (via JSONP), we initially allow users to search for usernames, generic phrases or hashtags. These original search results are then analyzed along each of the 5 data axes and, if matched, queued for secondary search requests. While this “spidering search” is ongoing, the visualization space is divided into columns based on the number of successful search queries. If a query did not return anything, its column is removed to maximize available screen space. Once all queries have been executed, the connections between retrieved messages are slowly revealed.
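A simplified sketch of how those five axes could be pulled out of a tweet's text (approximate regular expressions for illustration, not the project's actual code):

    // Extract the five connection axes from a raw tweet's text.
    // Simplified regexes; real-world parsing is messier.
    function extractAxes(text) {
      var replyMatch = text.match(/^@(\w+)/); // message starts with @user
      return {
        mentions: text.match(/@\w+/g) || [],           // @usernames anywhere
        reply:    replyMatch ? replyMatch[1] : null,   // direct @-reply
        hashtags: text.match(/#\w+/g) || [],           // #hashtags
        retweet:  /\bRT\b/.test(text),                 // retweet marker
        urls:     text.match(/https?:\/\/\S+/g) || []  // posted URLs
      };
    }

    // Each matched item can then be queued as a secondary search query,
    // feeding the “spidering search” described above.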

Features & Shortcomings

Any visualization has strong points and shortcomings. Our aim was to fill a current niche and provide the means to create a fairly macroscopic picture of Twitter activity, by attempting to trace how content & memes spread through the network. Unlike the more ubiquitous line and bar charts of other Twitter visualizations, ours was supposed to give a qualitative, not necessarily quantitative, overview. I also believe we have somewhat succeeded with this, as these examples clearly show (click on the images to see bigger versions on flickr):

Social Collider 1 hour after launch

Social Collider 16 hours post launch

SXSW panel by John Tolva

Guardian Open Platform launch by jaggeree

BookCamp & PaperCamp weekend, marked by the pink & red clusters near the top

Visually, the spirals have the effect of pen scribbles marking hotspots: messages which have resonated in the community and have triggered re-tweets, replies, or generally kickstarted a new meme (e.g. identified by a new #hashtag). The maps also show how quickly some of these trends propagate, spawning a multitude of messages in close succession (in time) and so causing clusters.

However, we're also aware of various shortcomings of the visualization. These mainly become obvious when one wants to drill further down into the data. Things like filtering or zooming are not possible at the moment, but would certainly add a whole new level of functionality & usefulness… For example, the above-mentioned clusters caused by events (e.g. #sxsw) or “major” news can be identified easily, but currently cannot be examined easily without zooming functionality. Yet without attempting to sound defensive, it's good to remember this so far was primarily just a browser experiment after all…

In fact, depending on the complexity of the returned data set, it's quite easy to bring your browser to its knees (especially Firefox… sorry guys, I still love you! :). This most likely has to do with the multitude of setTimeout() timers spawned to slowly draw the connection curves, and the sheer number of nodes in the SVG canvas used to create the visualization. Creating all particle nodes (sometimes several thousand), each with 3 mouse event listeners attached, doesn't help performance either! So my cheeky side is quite happy to have created a challenging environment for the browser(s) too, and I honestly was blown away by how well Chrome kept its calm, regardless…

Furthermore, there's also room for improvement on the data analysis side: because of the various URL shorteners in use (e.g. see my own is.gd creator), links pointing to the same URL are sometimes not matched. Also, the Twitter search API is not case sensitive, so it can happen that the wrong shortened URLs are associated in secondary queries. Both issues can be overcome, but again it's one of those things we simply didn't have time to implement so far.
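For illustration, the slow-reveal/setTimeout() pattern mentioned above boils down to something like this sketch (drawCurve() is a hypothetical rendering function, not the project's actual code):

    // Draw one connection per timer tick instead of all at once, so the
    // user can trace each curve as it appears. Every tick schedules the
    // next one, which is where the multitude of timers comes from.
    function revealConnections(connections, delayMs) {
      var i = 0;
      function step() {
        if (i >= connections.length) return; // all curves drawn
        drawCurve(connections[i++]);         // hypothetical: paints one B-spline
        setTimeout(step, delayMs);
      }
      step();
    }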

Future plans

I'm still thinking about adding support for other fairly ubiquitous services like flickr & del.icio.us, not only because there're strong overlaps with Twitter, but also because it would give potentially interesting insights contrasting/complementing Twitter messages with photos taken at similar times or links saved on delicious, which might provide further reading to links posted on Twitter… For example, one of our plans for flickr integration was to use the fantastic Pixstatic library to create colour fields in the background of the visualization, taking the average color of each retrieved image and blending them into each other, similar in style of a heatmap visualization.However, the stable version of Google Chrome currently hasn't got any support for ImageData access to pull off this feature. But the good news is that it's being worked on and is already implemented in the Chrome 2.0 beta version… It would be great to save & share generated visualizations by storing them in a link with original search term and timestamp, so they can be recalled anytime… (like: http://socialcollider.net/?q=from:toxi&t=1237561532) There're lots of other little ideas floating around and we're currently scoping out details how to take development further in practical terms, i.e. picking a license & hosting and preparing source code for an open source release… Please stay tuned!

Google Starts Ranking Twitter Search Results Pages

Thanks Blogstorm for highlighting this development in the search ranking of tweets. If Twitter opens up to Google a little more, could Twitter become a useful tool for increasing a brand's search presence?

by Patrick Altoft on March 23, 2009

Over the last couple of weeks it appears that more and more Twitter search results pages have started ranking on Google for “news” type queries.

This is interesting because Twitter has made no efforts to ensure the search pages are SEO friendly and even goes as far as blocking Google from spidering the pages using robots.txt.

User-Agent: *
Disallow: /search
Disallow: /*?

Google has obviously decided, algorithmically or manually, that even though it doesn’t know what content is on these Twitter pages, they are probably of high enough quality to warrant high rankings.

The screenshot below is from the “gaza” search result. You can see that Google isn’t spidering the page and therefore can’t generate much of a description or use the page’s correct title.

[Screenshot: Google result for the Twitter “gaza” search page]

The actual pages are likely being discovered in two different ways. The first is via the usual link discovery method where Google spots lots of links to a page and ranks it based on link data.

The second method Google might be using is to generate the Twitter results pages themselves. If there is a particular keyword that Google wants more results for they can just plug that keyword into the Twitter search page and generate a brand new page to suit.

We know Google is filling in forms on thousands of websites every day to try and index the deep web, so it comes as no surprise that they are doing the same with Twitter.

The only surprising thing is that Twitter is actually working against Google rather than embracing SEO and using it to build market share and perhaps even revenue.


How Google Is Showing Off Chrome

Cheers Businessweek for this report on Google's Chrome Experiments contest. Some pretty cool stuff, sans Flash, being done here. I personally love the browser. Just wondering how long before it hits critical mass and doing some of this cool stuff is worthwhile.

A new project spearheaded by the search giant calls on designers and developers to exploit the potential of its Chrome browser


The Google (GOOG) home page has been ransacked. The familiar, colorful logo is upside down. The search box and the "I'm Feeling Lucky" tab have been uprooted and now point up at a rakish angle, jutting into white space. Other links have tumbled from the top navigation bar and lie in a heap on the side of the browser. What on earth is going on? Have the hackers taken over?

Yes, actually they have. But they were invited to do so. The screwy design is part of a new series, Chrome Experiments, that Google launched on Mar. 18 to demonstrate the potential of Chrome, the search giant's much-trumpeted yet little-adopted browser, which itself was updated with a handful of new features the day before.

The jumbled home page is actually a program called Google Gravity. British interactive design firm Hi-Res! recreated the search giant's regular home page, giving users the ability to wreck the joint. With the mouse, a user can spin the traditional elements into space. They soar, they careen, they bounce, until they settle higgledy-piggledy at the bottom of the browser window.

Whimsical, Fun—and Pointless

There are a further 17 inventive devices displayed in Chrome Experiments, which was commissioned by tech lead Aaron Koblin, who works in Google's Creative Lab at the company's San Francisco office. He handed a simple brief to his chosen anarchists: "Here's this browser. Make something cool with it."

In Browser Ball, you casually bounce a beach ball between different windows on the screen. Twitch loads a series of windows to guide you through the levels of an infuriatingly addictive game. In BallDroppings, the user plays around with an interactive sound-generation device to produce artistically atonal results. A couple of the experiments use data from the micro-network Twitter as a starting point to create intricate visualizations. Not all of the projects are entirely clear. The overall results are whimsical, fun and, well, ultimately fairly pointless.

There is, however, a serious mission beneath the sense of play. Google wants consumers and advertisers to see the sophistication and reliability of Chrome's technology. These toys are built using the ubiquitous development language JavaScript, which has caused browser performance problems in the past. Given that many of Google's own Web programs, such as Gmail, rely heavily on JavaScript, Google is signaling that it's serious about building a strong, sturdy platform for cloud applications.

"There has been a steady flow of new releases and features [for other browsers]," wrote Gartner analysts David Mitchell Smith and Ray Valdes in a paper published on Mar. 13. "But the speed is not enough to keep up with Google's rising goals for Web applications." With Chrome, the company has taken matters into its own hands.

Slow Uptake of New Technology

Still, it's early days for Chrome, which is available only for PCs and currently boasts merely 1.15% of the browser market share, according to the latest figures from Net Applications. In contrast, Microsoft's (MSFT) established Internet Explorer (IE8 will likely be announced soon) has nearly 68%.

More serious for Google, as the company continues to build its case for cloud-based business apps, is the slow pace of enterprise adoption of new technology. According to Forrester Research, 60% of companies were still using Internet Explorer version 6.0 in the second half of 2008. That means they haven't upgraded to the latest version of the software they have, let alone contemplated shifting platforms altogether.

"There's a lot of effort and potential risk in moving off a browser," warns Forrester analyst Sheri McLeish. "Upgrading your company to a new browser may break customized enterprise applications." Design-focused programs build buzz and goodwill from early adopters, and a consumer-driven push could even help to drive buy-in from the bottom of an organization upwards. But McLeish notes, "Chrome is certainly not secure enough for prime time."

For now, Koblin says, he hopes the project will take on a life of its own and form a library of JavaScript experiments and a community of those constructing them—a "submit experiment" form is built into Chrome Experiments' interface. So is he volunteering to monitor the submissions personally? Koblin laughs. "The worst-case scenario of too many to look at would be wonderful."

Wednesday, March 25, 2009

evolution in music ownership and the cloud theory

How will The Cloud change the way we think about music ownership?
by Nicholas Deleon on March 23, 2009
This article is thanks to CrunchGear


One of the highlights of last week’s SXSW show, aside from seeing the Austin Crew again (hi, guys!), was when I spent some time talking to a few of the guys from Rhapsody, just like I did last year. The conversation touched on a number of topics, but the one I found most interesting was the changing notion of music ownership. That is, now that most of us are at least familiar with streaming, on-demand music from pick-your-service (Imeem, Pandora, Spotify, Rhapsody, etc.), will people in the future still see music as a “thing” that they own, or more like a service that they tap into whenever the need arises? Will people still cling to a finite number of MP3s on their iPod, or will they prefer to have their music in The Cloud, using a device (say, the iPhone) that can call upon any song at will? A sort of “Shoot, I wish I had that U2 song on my iPod right now” versus “Here, let me stream that U2 song for you.” And if people are becoming more comfortable with this type of music consumption, where does that leave traditional, download-to-own services like iTunes and Amazon MP3? The things we think about!

It’s like this: we’re right about at the point where most of us have a smartphone or other device with a reasonably reliable, always-on Internet connection. As such, we’re right about at the point where a service—one of the aforementioned, or perhaps some new one—can come along and say, “Oh hai! You know, instead of taking your iPod with you everywhere you go, why not just connect your phone to our service? We have every song in recorded history in our database (“Cloud”), and they’re all yours, provided you pay us $15 per month. Think about it: every song ever, in the palm of your hand. That sure beats listening to the same MP3s over and over again, right?!” That’s a best-case scenario, of course. While Rhapsody told me the record labels are now much easier to deal with than they were in the past—there are still a few music executives yelling, “Go away, Internet!”—we’re still a little bit away from having Everything Ever at our fingertips. On the technical side of things, that also assumes that our Internet connections are, indeed, sound as a pound (aside: I think that phrase needs to be updated!), something that any iPhone-using SXSW attendee will tell you isn’t exactly the case just yet.

But, for the purposes of this here article, let’s assume that all those problems have been solved. Let’s assume that the mobile Internet is fast, reliable and affordable, and that the record labels have opened up their vaults for placement in The Cloud; no technical issues remain. The only thing we have to confront now is the consumer and her listening habits: will they change? Have they already changed? Does Little Stacy, who’s currently in junior high and listens to music via YouTube and Imeem, portend an adult who won’t think of music in terms of CDs and MP3s, but as something that’s “just there,” for lack of a better term? She won’t have a personal music library, in the form of vinyl, CDs, MP3s, FLACs, or whatever; The Cloud will be her library, on which everything ever recorded will reside. The notion of “not having that album” will be totally alien to her; she has everything, always. No, she doesn’t own any of it—it belongs to the record labels, by way of your Rhapsody and Spotify (or whatever)—but it’s always available to her wherever she goes, so why should she care whether or not she “owns” it? Ownership, in this scenario, will become an antiquated concept, no longer applicable to current conditions, and Adult Stacy wouldn’t have it any other way. Nothing’s stopping Stacy from buying a physical copy of an album on some future whiz-bang format, one that includes a super-de-dooper high-quality copy of the album, but it would be the exception and not the rule. People still go camping (“roughing it”) even though they have a fully decorated master bedroom they can sleep in.

But that describes Little Stacy, our hastily invented character who’s currently in junior high; at most she’s 13 years old. What about Big Steve, who’s 15 years out of college and works in a spiffy office downtown? He still owns his music—in fact, he’s probably just getting used to buying songs off iTunes and the like—and the idea of songs “being there” is sorta weird to him. What if they’re not there? (Big Steve is a glass-half-empty kind of guy; blame the recession.) And even if the songs were there, why should he pay a monthly fee for an entire library of music he’ll never listen to? How is that better than having an iPod filled with only the songs he likes—he loves Danzig—without any garbage pop music getting in the way? A cynic could say that, long-term, Big Steve is irrelevant, since he doesn’t buy new music anyway, and besides, it’s his kids whom the music industry will be targeting in a few years. Let him “own” all the music he wants, since it’s only a matter of time till he isn’t even on the music industry’s radar. Of course, that completely ignores the fact that, with his fancy corner office, Big Steve has more disposable income to throw at the music industry (and the services we’ve been talking about) than Little Stacy ever will while she’s growing up. To ignore Big Steve, and all the dollar signs he represents, would be foolish. That’s not to say that Big Steve can’t still buy CDs and MP3s, of course, but the question here is whether or not people, in the future, will be comfortable with not owning music. And as people like Little Stacy use the aforementioned services, they’ll no doubt get used to it; it’ll be just another thing they do, like sending thousands of text messages per month or spending hours upon hours on Facebook.

Specific to Rhapsody, yes, you can now buy MP3s from their online store. (Even disruptive technologies and companies like to hedge their bets.) Whether or not that’s the way forward, or merely something done to placate the Big Steves of the world, is what we’re trying to determine. My guess? I hope they have house music in The Cloud.

Monday, March 23, 2009

Some outtakes from the recent SXSW conference. Nothing too earth-shattering, but good to see what's being discussed

Thanks Bazaarblog

Key Takeaways from SXSWi

March 19th, 2009 by Heather Brunner | Senior Vice President of Worldwide Client Services

This post was guest-written by Melissa Lipscomb, Bazaarvoice Community Manager

Spring break in Austin means SXSW, an exciting celebration of music, film and web creativity. I spent the last few days at SXSW Interactive – the portion of the conference that’s dedicated to social media and web 2.0. Here are my top 10 takeaways:

  1. People expect conversations online. Regardless of the industry/type of site, end users expect to engage with brands and with other users on-line. It’s not enough to provide information to your customers, you have to allow them to interact with your site, with your brand, and with other customers. Of course, tools like Ratings & Reviews, Ask & Answer, and Bazaarvoice Stories build customer engagement and interaction.
  2. Customers expect brands to participate in the conversation. There was lots of discussion at SXSWi about the importance of building relationships with customers, rather than simply focusing on transactions. Responding to feedback (both positive and negative), answering questions and taking action on feedback are an important part of building credibility and trust with your customers.
  3. Customers want authenticity. Several panelists emphasized the value of brand representatives talking “like real people” not robots (or corporatebots), even (or maybe especially) in industries where we’ve come to expect corporate jargon and legalese (like financial services and insurance).
  4. Online identities are converging. OpenID and Facebook Connect are enabling greater portability/sharing of online identities between sites. Profiles are important – people are invested in their identities online and want to build their reputations and leverage what they’ve done in one community in the other places they hang out. The most social-media-savvy customers are aware of their personal brands and welcome opportunities to build their brands on the sites where they shop.
  5. Mobile and web are converging. Many people access the web primarily from their phones, others switch back and forth with the expectation the user experiences will be identical.
  6. Online and offline are converging. GPS technology brings the real world into the mix in a big way (for example, your phone alerting a social networking site of your physical location, which allows your online friends to join you in the real world). Users are less likely to draw a hard boundary between their on-line and off-line lives. MobileVoice brings online UGC into the brick and mortar store, allowing customers to view reviews on their phones.
  7. Filtering and aggregating the massive amounts of data online is critical. There are too many inputs and the most valuable technologies on the web are those that allow people to personalize what they see or provide rolled-up summaries. Filtering by tag or attribute and summarizing data in tag clouds or histograms allows customers to process large amounts of information and make a decision quickly.
  8. Twitter is everywhere. Some of the most compelling and interesting conversations were happening “back channel” via Twitter during the panels. Panelists took questions and responded in real time to comments made on the Twitter stream for each panel. Fast and pithy user-generated content in real time is incredibly appealing to many people.
  9. Bazaarvoice is ahead of the pack. Admittedly, SXSW Interactive is a social media conference, not one focused specifically on e-commerce, but our ability to measure the success of user-generated content and deliver proven ROI for our clients stands out in an environment where many panelists were unsure about how to monetize UGC or how to measure results.
  10. Bazaarvoice has a great culture! Tony Hsieh (CEO of Zappos) gave a fabulous speech about the culture at Zappos which was very reminiscent of the Bazaarvoice culture. In addition, Bazaarvoice CMO Sam Decker hosted a core conversation on building a great corporate culture, which got lots of buzz and positive reactions.

This was a great conference; I look forward to seeing what next year brings!


Social media monitoring site

Pitch mania is over, so I can post a bit again. Here's a re-post of TechCrunch's story on yet another social media monitoring site. Although Omgili doesn't seem that amazing in itself, the post does have a nice list of the main tools out there.

There are plenty of ways to monitor the buzz around any given topic in the blogosphere, on Twitter, or across social networks. There are Artiklz, Trendpedia, Trackur, Brandseye, Radian6, Attentio, Buzzcapture and Chatterguard, to name a few. Now Omgili, a search engine that focuses on forums, discussion boards, newsgroups, and Q&A sites, has just added a new buzztracker called Omgili Stream. It searches the same set of discussion sites on the Web and returns results based on how recently they appeared.

Results are not ranked by anything other than chronology, which produces an undifferentiated set of results. What I really want to know is which are the most important or influential discussions going on about any given topic. Fortunately, Omgili Stream allows you to filter results by minimum number of replies, by language, and by where the search term appears (in the title, topic, or replies). Another filter opens up a column with Twitter search results on the left. A unified view might be preferable, but that might then be dominated by the Twitter results. Omgili’s strength is in searching through discussion boards, forums, and the like. It sifts through 7 million such posts a day.
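As a rough illustration of what that kind of filtering amounts to (the result shape here is hypothetical, not Omgili's actual API):

    // Filter discussion results the way Omgili Stream's controls describe:
    // minimum number of replies, language, and where the term appears.
    // Hypothetical result objects; Omgili's real data model may differ.
    function filterResults(results, opts) {
      var term = opts.term.toLowerCase();
      return results.filter(function (r) {
        return r.replies >= opts.minReplies &&
               r.language === opts.language &&
               r[opts.field].toLowerCase().indexOf(term) !== -1; // field: e.g. "title" or "topic"
      });
    }

    // e.g. filterResults(posts, { minReplies: 5, language: "en", field: "title", term: "ie8" });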

Omgili’s greatest strength (its focus on deep discussion sites) is also its greatest weakness. It completely ignores blog comments, for instance, where a huge chunk of discussion on the Web takes place. That is a huge oversight, in my opinion, although there are other sites, such as BackType or Artiklz, where you can search across blog comments only. And then what about public discussions on Facebook and other social networks?

Omgili is geared towards marketers who want to keep track of what people are saying about their products, companies and brands. Yet it returns results from only one portion of the Web. So if you are a marketer, you might want to bookmark it (consumers might be more likely to talk about product defects or other problems on a discussion board or Q&A site where they are looking for assistance from other users). But it only addresses a portion of the discuss-o-sphere.

As far as it goes, it does a decent job. One of the more helpful features of Omgili is the ability to create a buzz chart for any set of topics. Below is one comparing “IE8” to “Gmail” and “Flip Video.”