Searching For A Way To The Top

November 1, 2007

By Rachel Rosmarin

You know you’re watching the early days of an industry when entrepreneurs still think they have a chance to be David in a contest with the current Goliath.

In one corner of the search engine business, you have Google with 16,000 employees and a current market valuation of $217 billion. In September, Google conducted 64% of all searches conducted on the Internet in the U.S.

Then there’s tiny Mahalo. Jason McCabe Calacanis is chief executive and founder of the fledgling search engine company, which relies on hand-compiled data. Mahalo has an undisclosed amount of venture capital funding and fewer than 100 human editors who pick and choose what information searchers receive at Mahalo.com.

Or there’s Yahoo!, a behemoth in its own right but still a distant No. 2 to Google. Not just anybody can enter the search game, asserts Vish Makhijani, Yahoo! senior vice president and general manager of Yahoo! Search. “There’s a scarcity of capital, and of talent,” he says. Yahoo! trails Google by a wide margin, but it’s easy to forget that the company brought in $1.8 billion in its most recent quarter, based largely on search engine advertising.

Makhijani believes that Yahoo! has room to grow in search, but after many months of criticism from Wall Street and consumer defections to Google, he’s reluctant to share many details of Yahoo!’s battle plan, a choice that makes Yahoo!’s public statements a bit hazy and bureaucratic.

Calacanis, by contrast, is dreaming big–and he’s not shy about bragging about it. He thinks his human-powered search engine will be a breath of fresh air to consumers sick of spammy search results offered up by Google and Yahoo!.

“No offense, but those guys in senior vice president positions at big companies don’t have the vision to realize this could be successful,” he says. “They think it can’t be done. That’s what creates a market for delusional people like me.”

Both Yahoo! and Mahalo believe there is more than enough room in search for both of them to make a good living. But both companies face a long–and likely impossible–slog if they hope to topple Google’s Goliath.

We asked Makhijani and Calacanis to lay out their vision of the future of search, opinions on Google’s weak spots, the sustainability of search engine advertising, and the chances for tiny search start-ups.

And both men are bold enough to argue why their companies will be the search engine leader in five years.

Forbes: Google is the leader in your industry. But everyone makes mistakes. What is Google’s weakness?

Vish Makhijani: Doing search is hard. It’s billions of documents with a millisecond response time. Only a few companies can do this well. Even throwing tons and tons of money at it, like [Microsoft in] Redmond does, isn’t a formula for doing it well.

But the industry is diverging. The notion of “10 blue links” is going away. If you cover up “Google” or “Yahoo!” on the 10 blue link search-results pages, a user can’t tell the difference between us.

Jason McCabe Calacanis: The people who run Google are very smart, and they hire incredibly smart people. They’ve done a great job building elegantly simple products that users intuitively understand. When a company gets too distanced from its founders, the emphasis on product isn’t there. Yahoo! is going to have a massive resurgence because Jerry Yang is now engaged as CEO.

Google’s greatest weakness will be maintaining focus and retaining their talent. They’ve been so successful that maybe some of those talented people who’ve been there since early on don’t need to work any more.

Yahoo! started as a “human-powered” directory in 1994. Mahalo’s search results are handwritten by people. Will we see a mainstream return to edited search results?

Makhijani: It’s a notion of trust. Jerry and David built the directory, they did all the work. Then comprehensiveness became super-important, but you forsake the trust that a real person said, “We’re going to put this here.” Things are coming full circle, but a “today’s version” of Jerry and David’s directory is not going to fly.

Calacanis: In fairness, all the big guys use human gestures to determine relevance. But they don’t rank the stuff with human input. [Mahalo] will appeal to a certain group of people, and Google’s machine search will appeal to another group of people. There’s a perception in our industry and our business that winner takes all, but the truth is, there’s a lot of fragmentation.

The odds seem slim that either of you will beat Google and become the top search engine within five years. Convince me otherwise.

Makhijani: There’s nothing about search today that locks somebody in as No. 1 for years and years.

We are not the leader, and we’re going to attack and solve as many problems as we can to become the leader. We take tons of chances. We can take an iterative approach to trying new ideas. We are blessed with a really good lab-testing environment. We just launched [search advertising system] Panama, and we take a lot of that revenue and pour it back into [research and development] of our search product. Luckily, search advertising is a great business model to have.

Calacanis: I fully believe we will become one of the top three search engines. More than 1 million people visited us in the last 30 days, and we just launched. People like to surf around a directory–it is discovery. Maybe people stopped doing that on the other search engines because there’s so much spam on those sites. Our site is not really game-able.

I think we can get to that top level, and I wouldn’t be doing this if I didn’t think we could. Google is going to be very hard to displace, but it is still possible. You never know. Google became No. 1 in five years, so why can’t we? It will be a lot of hard work, and it will take five years. In that time we could get double-digit market share.

Is “search engine fatigue” a real problem? It is hard to believe that people give up on what they’re looking for so easily.

Makhijani: When you look at customer satisfaction numbers, they’re high, but if you peel the onion you see a fair amount of lack of success. We see abandoned and failed searches. We see users go check their e-mail instead. They blame themselves for bad queries when they don’t succeed. That’s garbage. There’s opportunity for improvement here.

Calacanis: Search engine fatigue is very real. Almost anything can be found, but you get into a needle-in-a-haystack problem. And now there’s a cottage industry of people who try to intercept you as you search. We’ve done testing in our lab, and all the time users give up on searches. Some of the problem has to do with the person–it takes two to tango. There are spelling-error issues and ambiguity issues. People don’t know enough to type in the words that would help.

Search engine advertising is big business, but it is getting messy. How long can that business model last?

Makhijani: A significant population doesn’t always distinguish when something is advertising or a search result. The notion of trust is super-important, and we take that very seriously. But we continue to deliver value in that advertising as well. The same team of scientists works on the relevancy of ads as on search results. And if you force lower-quality ads, people will choose another search engine. We’ve seen that. Will that business model change over time? Who knows? I see keyword advertising continuing to disrupt legacy advertising, so it will last a long time.

Calacanis: If you’re using machines, they’re going to be game-able. It is like IRS or credit card fraud. Some people will cheat. There are good “search-engine optimization” techniques, but 90% of it is black hat and smarmy. The people who are looking for loopholes and looking for ways to increase their ranking require a policing effort from Google. It is not going to end.



The Perfect Search

October 30, 2007

By Penny Crosman

If you want to find out what Brad and Angelina are up to, Google is a great search tool. Type in the celebrity names and poof, you get a list of the latest stories about the Brangelina baby-to-be. But if you need a technical or business-oriented search, Internet-style search technology doesn’t cut it. Accurate enterprise search depends on intelligent use of state-of-the-art taxonomies, metatags, semantics, clustering and analytics that find concepts and meaning in your data and documents.

The idea that the enterprise can’t be searched like the Web sounds foreign to many business executives. “Why can’t we use Google?” says the CEO. IT obediently buys Google’s search appliance, turns it on and the problem is solved. Or is it? “For some companies, Google is fine,” says Laura Ramos, Gartner analyst. But where many repositories of non-Web content and documents need to be searched or critical information must be found quickly, companies need to design searches that approximate human reasoning.

No one product can do this. But by mixing and matching the latest taxonomy, clustering, and entity, concept and sentiment extraction tools, you can get close. What’s helping is the rise of XML: As more companies realize the benefits of reading and sharing information in standard XML formats, such as RDF, ebXML and XBRL, more products roll out to convert documents, databases and other content into XML. The information provided in XML tags and formatting brings a level of intelligence about documents and content hitherto unavailable. Next-generation search technologies are taking advantage of XML formatting and metadata to provide searches informed by insider information and structure.

Structuring Content

The main trend adding power to enterprise search is the increase in semistructured information: content that has some kind of structure to it, generally through the use of metatags that describe content. E-mail, which is structured with “To,” “From” and “Subject” fields, is one example of semistructured data. XML is also expanding the universe of semistructured content as industries adopt XML schema, such as ACORD (Association for Cooperative Operations Research and Development) for XML in the insurance industry and XBRL (Extensible Business Reporting Language) in the financial services arena. Such schema help businesses exchange and analyze data in a standardized way.

Basic structure is provided by metatagging. An author or software program identifies the elements of a document, such as headline, abstract, byline, first paragraph, second paragraph and so on to modestly improve search results.
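A minimal sketch of why those metatags matter at query time (a toy illustration, not any vendor’s actual product; the field names and weights are invented): a match in the headline or abstract can be weighted more heavily than one buried deep in the body.

```python
import re

# Invented field weights: a headline hit counts triple, an abstract hit
# double, a body hit once.
FIELD_WEIGHTS = {"headline": 3.0, "abstract": 2.0, "body": 1.0}

def score(document, query_terms):
    """Sum field-weighted term matches for one metatagged document."""
    total = 0.0
    for field, text in document.items():
        words = re.findall(r"[a-z]+", text.lower())
        weight = FIELD_WEIGHTS.get(field, 1.0)
        total += weight * sum(words.count(term) for term in query_terms)
    return total

doc = {
    "headline": "Enterprise search strategies",
    "abstract": "How taxonomies improve enterprise search accuracy.",
    "body": "Keyword search alone often fails inside the enterprise.",
}
print(score(doc, ["enterprise", "search"]))  # prints 12.0
```

With no structure, all three hits would count equally; with the tags, the same two query words rank this document higher because they appear in its most meaningful elements.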

Using content structure in the display of search results is useful. If a search engine can present the headline, abstract, graphics and the first and last paragraphs of an article, the user gets a good idea of what it’s about — much better than the typical document “snippet” that’s often of no use at all.

A few vendors are using XQuery, a command-oriented, SQL-like standard for creating search statements, to exploit the structure of XML-tagged content. Mark Logic, for example, converts documents and databases to XML, provides structural metatagging, and indexes the content and tags in a database where they can be mined by a variety of text analytics tools. Similarly, Siderean Software’s Seamark Metadata Assembly Process Platform converts unstructured and structured data to RDF (Resource Description Framework), generates metadata such as page title and date, and organizes the content and tags into relational tables. Entity and concept extraction can be applied to create tags, and metatags can be suggested to an editorial team, which can approve and refine them. Content and metadata are then pulled into a central repository where they can be organized according to corporate vocabularies or ontologies and mined using the tagging results.
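The idea of structure-aware querying can be sketched without XQuery itself, using Python’s standard-library ElementTree as a stand-in (the document and its element names are invented; this illustrates the concept, not any vendor’s pipeline):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML-converted document, in the spirit of the pipelines
# described above.
xml_doc = """
<article>
  <headline>XBRL adoption grows</headline>
  <byline>J. Smith</byline>
  <para n="1">Financial filings in XBRL are spreading fast.</para>
  <para n="2">Regulators are taking notice.</para>
</article>
"""

root = ET.fromstring(xml_doc)
# A structure-aware query: search only inside first paragraphs, roughly
# what an XQuery path like /article/para[@n="1"] would express.
hits = [p.text for p in root.findall("para[@n='1']") if "XBRL" in p.text]
print(hits)
```

A plain keyword engine would treat the whole article as one bag of words; the tagged version lets the query say *where* in the document a term has to appear.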

Building Taxonomies

With metatags and some structure in place, the next logical step to improving an enterprise search is to build a taxonomy. For as long as search technology has existed, it’s been obvious that the first step toward getting something more accurate than 500,000 useless hits is to create context or navigation for the search, such as a taxonomy — a classification according to a predetermined system. A taxonomy can be as basic as organizing documents by month or client, or it can be a sophisticated scheme of concepts within topics.
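At its simplest, a taxonomy is just a predetermined hierarchy that content is filed into. A minimal sketch (category and file names are invented):

```python
# A predetermined classification scheme: topics, subtopics, documents.
taxonomy = {
    "Insurance": {
        "Life": ["whole_life_faq.doc", "term_life_rates.xls"],
        "Auto": ["claims_process.pdf"],
    },
    "Finance": {
        "Reporting": ["xbrl_overview.doc"],
    },
}

def browse(tree, *path):
    """Walk a category path, returning the subtree or document list there."""
    node = tree
    for category in path:
        node = node[category]
    return node

print(browse(taxonomy, "Insurance", "Life"))
```

Browsing the hierarchy answers “what’s in this collection?” without the user having to guess the right query words.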

“Categorization lets you sharpen the search and do concept-based retrieval as well as browsing,” says Sue Feldman at IDC. “It lets a user answer questions that can’t be answered by search alone, such as, ‘What’s in this collection?’ or ‘I’m interested in going on a vacation, but I don’t know where; what are some interesting places?'”

With a taxonomy in place, users can browse through categories and discover information they need but didn’t know how to look for (indeed, few people understand how to write effective search queries or ask the right questions of a search engine). The tricky part is deciding who will build the taxonomy. Who is willing, able and blessed with sufficient free time to decide what the structure should be and where each new piece of content fits in?

The most straightforward answer is to have authors categorize and apply the proper metatags and keywords to their content. Publishers of magazines and technical publications, for instance, take a structured-authoring approach using marked-up templates. But this laborious practice is not for everyone, and in a typical company, most users lack the time and inclination to fill out forms describing each document.

A more lightweight method of categorizing, called “folksonomy,” is becoming popular on the Web, where sites like Flickr and Del.icio.us provide those submitting photos or lists with easy-to-use tools to annotate their content. “By combining annotation across many different contributors, you gain insight into useful information and get around some [of the] problems with more traditional approaches to metadata management,” says Brad Allen of Siderean.

With an active community of users assigning categories and metatags, valid new terms, initiatives and projects are easily added to the existing taxonomy, making it more dynamic than a rigid taxonomy created by a librarian. “It’s sloppy and it’s chaotic, but the degree to which it improves precision in the retrieval process can be quite significant,” Allen says.
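The mechanics of combining annotations across many taggers are simple: aggregate everyone’s tags and let the counts surface a consensus vocabulary. A sketch (the users and tags are invented):

```python
from collections import Counter

# Folksonomy in miniature: several users independently tag one item;
# tallying the tags reveals terms no single librarian chose in advance.
tags_by_user = {
    "ann": ["beach", "snorkeling", "hawaii"],
    "bob": ["snorkeling", "vacation"],
    "cara": ["hawaii", "snorkeling", "reef"],
}

consensus = Counter(tag for tags in tags_by_user.values() for tag in tags)
print(consensus.most_common(2))  # the most agreed-upon labels first
```

The sloppiness Allen mentions is visible even here (one-off tags like “reef”), but the high-count tags are exactly the retrieval handles a rigid scheme would miss.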

Formal taxonomies are usually created by a librarian or cataloger trained in library science. This can be effective, but it’s expensive, time-consuming and hard to keep up-to-date.

Sometimes Web masters help determine relevancy. Google’s search engine creates page ranks based on how frequently people link to a given piece of content. The downside to this is that most companies’ documents and data sources have little or no record of content linking. “That this is lost on most people is a triumph of branding and makes page-rank-free Google somewhat akin to caffeine-free Jolt as a product,” says Dave Kellogg, CEO of Mark Logic.

Clustering tools, such as those from Engenium or Vivisimo, create an ad hoc taxonomy by grouping search results into categories on the fly (search engines from Inxight Software and Siderean also cluster results). With clustering, a search for the term “life insurance” on an insurance company’s site would display results grouped under headings such as Whole Life, Term Life and Employee Benefits. It’s a fast and efficient way to categorize content, but it’s not always accurate; there’s no consistent set of categories, and the results can be strange because there’s no human involvement.
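The life-insurance example above can be sketched in a few lines. This is a deliberately naive version of on-the-fly clustering, grouping by fixed key phrases rather than the statistical methods real products use; the result titles and headings are invented:

```python
from collections import defaultdict

# Invented search results for the query "life insurance".
results = [
    "Whole life insurance premium tables",
    "Term life policies explained",
    "Whole life vs. term life",
    "Employee benefits and group life plans",
]

HEADINGS = ["whole life", "term life", "employee benefits"]

# Group each result under every heading phrase it contains; a result
# can land in more than one cluster.
clusters = defaultdict(list)
for title in results:
    for heading in HEADINGS:
        if heading in title.lower():
            clusters[heading].append(title)

for heading, titles in clusters.items():
    print(heading, "->", len(titles))
```

Note that the third result falls into two clusters at once, a small example of why ad hoc categories can look strange without human curation.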

Combining Search Tools

The next step to intelligent search is to apply text analytics tools. Several small companies are providing analytics software for entity, concept and sentiment extraction.

Sentiment extraction, or sentiment monitoring, the newest of these tools, tries to identify the emotions behind a set of results. If, for example, a search uncovers 5,000 news articles about the Segway, sentiment extraction could narrow the set down to only those articles that are favorable. Products from Business 360, Fast, Lexalytics, NStein and Symphony all provide sentiment extraction. IBM has layered NStein technology on its OmniFind enterprise search platform to support “reputation monitoring,” so companies can know when their public image is becoming tarnished.
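The Segway scenario can be sketched with the crudest possible approach, a hand-written word list (the commercial products named above use far richer linguistic and statistical models; the lexicon and headlines here are invented):

```python
# Toy sentiment lexicon; real tools model negation, context and degree.
POSITIVE = {"praised", "innovative", "loved"}
NEGATIVE = {"recall", "flawed", "criticized"}

def polarity(text):
    """Positive word count minus negative word count."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

articles = [
    "Segway praised as innovative commuter tool",
    "Segway recall after flawed firmware update",
]
favorable = [a for a in articles if polarity(a) > 0]
print(favorable)
```

Filtering 5,000 articles down to the favorable ones is, structurally, just this comprehension with a better `polarity` function behind it.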

Entity extraction uses various techniques to identify proper names and tag and index them. Inxight and ClearForest are the two leading providers of entity-extraction software, and many search tools embed or work with their technology.
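One of the simplest such techniques is pattern matching on capitalization. This sketch catches multi-word proper names only, and misses everything the commercial tools handle with dictionaries, context rules and statistical models:

```python
import re

# Naive entity extraction: runs of two or more capitalized words.
def extract_entities(text):
    return re.findall(r"\b(?:[A-Z][a-z]+(?:\s[A-Z][a-z]+)+)\b", text)

text = "Angelina Jolie met executives from Coca Cola in New York."
print(extract_entities(text))
```

Once extracted, each entity string becomes a tag that can be indexed alongside the document, which is what makes searches like “all documents mentioning this person” possible.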

Concept search tools put results in context, as in Paris the city versus Paris the person or Apple the company versus Apple the fruit. These tools use natural-language understanding techniques to make such distinctions. Autonomy and Engenium are two vendors of concept search software.

Adding a Backbone

Assuming you need more than one search technology, how do you knit disparate solutions together? IBM’s answer is Unstructured Information Management Architecture. Recently published on SourceForge.net, UIMA is an XML-based framework whose source code is available to third-party search-technology developers. It acts as a backbone into which text analytics and taxonomy tools can be plugged.

UIMA may sound like a gimmick to promote IBM’s OmniFind enterprise search product, but because its business is driven by services more than software, IBM is willing to pull in other, sometimes competing applications. “No single vendor can address all analytics needs or all requirements to understand unstructured information,” says Marc Andrews, director of search and discovery strategy. “Companies need different analytics for different sets of content; [what’s] relevant to the life sciences community will not be relevant to the financial services industry. And even within an organization, the analytics relevant to warranty claims and customer service data will be different from the analytics relevant to marketing, HR and generic interest.”

UIMA provides a common language so search results can be interpreted by different applications or analytics engines. The framework defines a common analysis structure whereby any content — whether it be an HTML page, a PDF, a free-form text field, a blob out of a database or a Word document — can be pulled into a common format and sent to a search tool. Results are fed back into the analysis structure and passed along to the next search tool. The final results are output in a common format that any UIMA-compliant application can use.
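The pipeline pattern described above can be sketched in miniature (this illustrates the idea, not the actual UIMA API; the structure and engine names are invented): every piece of content is normalized into one shared structure, and each analysis engine reads it and appends its own annotations.

```python
def to_common_structure(raw, source_format):
    """Normalize any content into a shared dict the engines agree on."""
    return {"source": source_format, "text": raw, "annotations": []}

def entity_engine(cas):
    # A trivial analyzer: tag capitalized words as candidate entities.
    for word in cas["text"].split():
        if word.istitle():
            cas["annotations"].append(("entity", word))
    return cas

def length_engine(cas):
    # A second, independent analyzer adding its own annotation type.
    cas["annotations"].append(("length", len(cas["text"].split())))
    return cas

# The "backbone": each tool consumes and enriches the same structure.
pipeline = [entity_engine, length_engine]
cas = to_common_structure("Mayo Clinic adopts the framework", "pdf")
for engine in pipeline:
    cas = engine(cas)
print(cas["annotations"])
```

The point of the standard is that the two engines never need to know about each other, only about the common analysis structure passed between them.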

Can UIMA become a universally accepted backbone that holds search tools together? Some think UIMA is on its way to becoming a de facto standard. So far, the Mayo Clinic, Sloan Kettering and the Defense Advanced Research Projects Agency are adopting the framework, and 15 vendors, including Attensity, ClearForest, Cognos, Inxight, NStein and Siderean, have agreed to make their search tools UIMA-compliant.

In a case of co-opetition, Endeca will support UIMA in an upcoming release of its enterprise search software even though the company competes with iPhrase, which was acquired last year by IBM. “UIMA will uncomplicate the world,” says Phil Braden, Endeca’s director of product management. “As more and more people adopt UIMA as the standard for how structured and unstructured data is supposed to look and how these components are supposed to integrate, it becomes that much easier to pull data from these different systems into Endeca.”

There’s little to challenge UIMA other than a couple of XML initiatives that also address the standardization of data formats for search engines. One such initiative is Exchangeable Faceted Metadata Language, an open XML format for publishing and connecting faceted metadata between Web sites, but that standard doesn’t have the momentum of something being pushed by IBM.

Not every company, of course, will go to all the lengths described here to architect accurate search. For some, keyword search and placement of documents in well-labeled electronic folders will suffice. The sophisticated search pioneers are e-commerce sites, pharmaceutical companies and government agencies, which have the most to gain: greater sales, faster drug development, detection of terrorist activity. Call centers are getting search makeovers so that multiple search tools can mine unstructured content and databases together and give reps all the information they need to close calls. What could broader and more accurate searches achieve in your company?



Searching For Better Ways Of Finding Things On The Net

October 29, 2007

By David H. Freedman

Tui Stark is searching for a vacation paradise and can’t find it. Googling “snorkeling beaches blue water” turns up listings for scuba diving, real-estate firms, rafting outfits. So Stark, a photography stylist in Needham, Massachusetts, turns to Quintura, one of many upstart search engines, which allows her to focus the results on snorkeling. “The Google results just had too much stuff I wasn’t looking for,” she says. “I wanted to zoom in on the best snorkeling beaches.” And within seconds, Quintura delivers.

That’s a bad result for Google, which is more vulnerable than you think. By virtue of dominating Web search—Google draws 60 percent of all searches worldwide, says market-research firm comScore, with Yahoo a distant second at 14 percent and mighty Microsoft limping along at 4 percent—Google has not only become the reigning heavyweight of the online world, but it has also transformed advertising, riled governments and sent tremors through Wall Street. As of last week its stock was valued at $200 billion, more than five times that of Yahoo, and nearly three quarters that of Microsoft. Now it’s threatening to shake up the trillion-dollar corporate-computing and wireless-communications markets.

Despite spending billions trying to diversify beyond the straightforward search offered on its stripped-down, almost childlike home page, Google reaps about 60 percent of its outsize revenues and more than 80 percent of its profits from ads on that page, according to analysts’ estimates. That means the company’s success continues to hinge on the dominance of its simple search. There are no guarantees that its dominance will last. It is threatened by a massive worldwide effort to build a better search, involving giant high-tech rivals, governments in Europe and Asia, and hundreds of tiny start-ups founded by academic wunderkinds much like Sergey Brin and Larry Page, the Stanford graduate students who founded Google in 1998. And it’s also dependent on an online public that may make up the most fickle market in history, an audience whose interests are already showing signs of wandering outside the search box.

Google may well be able to continue its charmed life by holding onto its search lead and getting its non-search businesses to kick in more profit, and Wall Street is certainly betting that way. But the computer world has a way of bringing seemingly golden brands down to earth with surprising speed, as Lotus, Novell, AOL and other firms have discovered. It’s not farfetched that five years from now we may wonder why everyone thought Google was such a big deal. “Google has won the first stages of the Web-searching race,” says Trip Chowdhry, an analyst with Global Equities Research in San Francisco. “It won’t win the next one.”

History shows how quickly search leaders can lose their way. The race kicked off in 1995, when researchers at Digital Equipment Corp. (remember them?) figured out how to store the words on Web pages as an index that lent itself to lightning-fast searches. The resulting AltaVista search engine quickly became a favorite home page for early Web users, and seemed destined to rule search. But in 1998 word started getting around about a new search engine from a tiny company with a goofy name that sometimes returned more-useful results, and by 2000 Google was the search engine to beat. Yahoo, with a stunning lack of foresight, put Google’s search box on its home page that year, burnishing Google’s reputation with Yahoo’s tens of millions of users. Microsoft, caught napping, wouldn’t even enter the search-engine race in earnest for another three years. When Google tweaked its business model by linking ads to searches and charging advertisers only when searchers clicked on them—an approach it copied from rival online marketing firm Overture—it converted its search box into a money machine. Right now that machine is producing $15 billion a year, of which almost $4 billion is profit.
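The AltaVista breakthrough mentioned above, storing the words on Web pages as an index, is the inverted index that still underlies every engine in this story. In miniature (page names and text are invented):

```python
from collections import defaultdict

# Three tiny "Web pages".
pages = {
    "page1": "cheap flights to paris",
    "page2": "paris hotels and restaurants",
    "page3": "cheap paris hotels",
}

# The inverted index: each word maps to the set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(*terms):
    """A query is just a set intersection, not a scan of every page."""
    sets = [index[t] for t in terms]
    return set.intersection(*sets) if sets else set()

print(sorted(search("cheap", "paris")))
```

Because lookups touch only the query words’ entries rather than every document, this is the structure that made millisecond responses over billions of pages feasible.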

If Google has been able to crush its search competition, it’s not because it has perfected the art and science of Web searching. Far from it. Google is what the industry calls a “second-generation” search engine. First-generation engines like AltaVista found Web pages containing words that matched the user’s search words. Google’s innovation was to further rank a Web page by the other pages that link to it, on the somewhat shaky assumption that if a page is much-linked-to, it must be useful. Charles Knight, an analyst who runs the AltSearchEngines Web site, notes there’s a plethora of good ideas for what a third-generation engine might bring to the party, and no shortage of companies trying to prove those ideas. “Each has shown they can do some aspect of a search better than Google can,” says Knight.
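Google’s link-based ranking idea can be sketched in a few lines as the standard power iteration with a damping factor (a textbook-style illustration of the published PageRank formulation, not Google’s production system; the link graph is invented):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iterate 'a page is important if important pages link to it'."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...and passes the rest of its rank along its outgoing links.
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
        rank = new
    return rank

# A much-linked-to page ("c") should come out on top.
links = {"a": ["c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))
```

The “somewhat shaky assumption” the article notes is visible here: the only signal is who links to whom, which is exactly what link farms and other spam later learned to exploit.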

Yahoo, for one, has been frantically working to leapfrog Google. One new feature of its engine provides search-term suggestions that pop up as soon as you start typing your query—a possible antidote to the frustrating process of having to keep repeating a search with different terms in order to find helpful results. (Google reminds you of searches you’ve previously typed in.) Another offers shortcuts to following up on certain types of popular searches. Typing in a movie title, for example, brings up a trailer and local showtimes; typing in “restaurants” and a city narrows down the choices by neighborhood, cuisine or popularity. More is coming, says Vish Makhijani, head of search for Yahoo. “We’ll know when you’re ready to make a reservation versus when you’re just doing research, and we’ll let you make the reservation right there on the search page,” he says.

Microsoft, too, is eager to provide new ways to merge its Windows Live Search with other online and PC-based tasks. So far the company hasn’t taken advantage of the dominance of Windows to drive search traffic its way, but that will change, says Microsoft’s search chief, Brad Goldberg. “We’ve just begun integrating search in a meaningful way with our assets,” he says. “We’re working on ways to capture what the user is doing and carry it into the search experience.” In theory, that could mean a Microsoft search on “Coke” would give an accountant financial information on the Coca-Cola Corp., while a student writing a term paper on health and diet might get the nutritional rundown on a can of soda.

In fact, the biggest competitive hurdle for Yahoo and Microsoft is not that their searches don’t work as well as Google’s, but that people just don’t try them as often. According to a recent Nielsen/NetRatings survey, the gap between Google, Yahoo and Microsoft narrows when you look at the percentage of users of each site who keep returning—79, 69 and 65 percent, respectively. A University of Michigan study released in August shows that Yahoo passed Google in customer satisfaction in the past year.

Google has already been relegated to also-ran status in several key markets worldwide. It gets less than 2 percent of queries in Internet-happy South Korea, and 17 percent of the queries in China, the world’s most important emerging online market. The company has also been trounced by local competition in Russia. Google dominates searching in Western Europe—82 percent of queries come its way in Germany—but the German and French governments plan to put up $165 million and $122 million, respectively, for search-engine research. In Japan, not only is Google running behind Yahoo, but the government is reportedly pumping some $125 million into local search efforts. Meanwhile, notwithstanding rumors of a forthcoming phone, Google hasn’t yet established leadership in the mobile-phone search market, expected to be lucrative.

Yahoo, Microsoft and governments aren’t the only ones seeking a cure for Google envy. In 2005 and 2006, venture-capital firms injected $350 million into 79 search-related start-ups. Knight tracks no fewer than 1,000 search contenders, mostly U.S.-based, that have something to recommend them. Among the features that he and other experts believe might be hallmarks of a third-generation search engine:

Word smarts. Some search engines, like Hakia, the forthcoming Powerset and Sydney-based Lexxe, are trying to go beyond matching your exact query words—they seek to get a sense of what you’re looking for and pull up the best pages based on an understanding of their content. “In most cases the document you want won’t contain all your search terms,” notes Rohini Srihari, a University at Buffalo computer scientist and CEO of Janya, an Amherst, New York, company specializing in searching for counterterrorism leads. “And if you’re looking to discover who or what has suddenly become a hot topic, you won’t even know what search terms to use.” A smart search engine might know that when you plug in “Paris,” “Tokyo,” “New York” and “hottest restaurants” that you’re looking for popular new restaurants around the world.

Editing. No matter how clever a computer program, it will never match a human brain for determining quality and relevance. Some new search engines, including Mahalo and ChaCha, rely in part on human editors or guides to pre-cull the most relevant pages for some searches. You’ll probably get more select results than on Google—but only if your search terms are among those the editors have explored.

Focus. Google searches everything—but you don’t want everything. You’ll actually get more relevant results with a search engine that indexes a much smaller number of pages, as long as the pages are on-topic. Trulia searches out homes for sale, Healthline lets you plug in symptoms to track down possible causes and treatment, Globalspec’s searches are aimed at industrial engineers, Like.com offers pictorial product searches, and Spock specializes in information about people.

Guided queries. It’s hard to guess which search terms will do the best job, but some search engines help by suggesting terms, as do Yahoo and a start-up engine called Accoona, or by grouping results into categories that focus on the desired topic, as do Ask.com and Clusty. Type “spears” into Ask.com, for example, and it will suggest you steer the topic in either the pointy or pop direction; Google just mixes them up. A number of cutting-edge engines, including France’s KartOO, KoolTorch and Quintura—founded in Moscow and now based in Virginia—display the categories in graphic maps that visually suggest which categories are likely to be the most useful.

Community. NosyJoe, Squidoo and Sproose allow other users to help determine which pages are most useful, cutting down on the often irrelevant and spam-ridden results that come up via Google’s link-counting approach. Wikia, which has ties to the online, everyone-can-chip-in encyclopedia Wikipedia, is working on a search engine based on user contributions, and the Web-page bookmarking service Del.icio.us, bought by Yahoo in 2005, allows searching through everyone else’s labeled bookmarks to find relevant pages.

Right now all these underdog search engines (except Ask.com, the No. 4 search site) have a combined share of less than 5 percent of all queries, according to Knight. But even if one or more of them starts to gain traction, does Google really have to worry about being bested by some obscure search engine, given its longstanding, widespread popularity? After all, Microsoft continues to dominate software, in spite of persistent claims that better alternatives like Apple and Linux are out there. Google’s dominance, however, is different from Microsoft’s. The costs of dumping Windows can be intimidating, between setting up new hardware or software, retraining and lost productivity. What’s to keep someone stuck on Google? “The moment someone proves themselves better than Google, people will switch in a heartbeat,” Srihari says. Just ask anyone who was at AltaVista in the late 1990s.

Google isn’t waiting around to be AltaVistaed. Its smaller challengers can’t hope to match the company’s massive investments in computing infrastructure, said to include more than 450,000 servers. So be prepared to wait an annoying three seconds or so for results on some of the wanna-be search sites, compared with Google’s blink-of-an-eye speed. And with $12 billion cash on hand, Google can buy hot companies that pose a threat, much as it plopped down $1.65 billion last year for YouTube, whose video search crushes Google’s own in popularity. “Google was buying tiny search companies at the rate of two per week at one point,” says Knight.

Even $12 billion and the billions more Google could borrow wouldn’t buy all the world’s search competition. The performance gap won’t be hard to narrow for a hot new company freshly fueled by investors. In the end, Google has to have a better search to stay on top. Thus its army of software engineers is looking at every wrinkle in search, insists Google’s research director, Peter Norvig. “I guess we’re paranoid,” he says. The company has already injected several new technologies into its search—for example, results take into account pages you’ve clicked on in the past, provided you’ve signed up to have your searches tracked. You can type in your query in plain English, get suggestions for search-term refinements, or do any of more than 40 specialized searches, including movies, government Web sites, patents, airline flights and human faces. Google just doesn’t advertise any of these features, or make them plain. Although it’s clear Google is capable of plenty of search innovation, there’s a reason the company sometimes acts as if its hands are tied when it comes to implementing next-generation techniques. “People don’t want radical change from us,” says Matt Cutts, head of search quality at Google. “Our biggest task is ensuring simplicity.”

It’s true, most mainstream searchers do tend to value the stripped-down, no-brainer elegance of a thin box that takes a few words and delivers straightforward results. Given that a growing number of queries are being funneled to alternative engines, there are clearly plenty of power searchers willing to accept a little complexity in return for better results. It wouldn’t take a smash-hit new search engine to steal Google’s thunder; the damage could take the form of a slow leak of searchers to a variety of engines that each have some special appeal. Another threat to Google may be online social networking sites such as MySpace and LinkedIn. “We’ll likely see dozens or hundreds of specialized search engines that collectively chip away at Google’s dominance,” says Brant Bukowsky, founder of Plus1 Marketing, a search consultancy.

Last quarter, Google raked in $925 million in profit, 28 percent more than the same quarter last year. The game is still Google’s to lose. Even Stark, who resorted to Quintura to find her snorkeling beach, still makes Google her first stop when she needs to track down a Web site. What, after all, would Google have to fear from a tiny company with a goofy name that sometimes returns more-useful results?
