Rethinking: Are LLM chat interfaces really what users want for search?

Why are LLM chat interfaces being used to display search results?

With the rise of AI-powered tools, many apps are now using large language models (LLMs) in chat interfaces for search. But I question whether this is what users truly want.

Search, as we know it today, is efficient and familiar. The search bar model—starting broad and narrowing down—has been refined over decades. Users value control, transparency, and trust. Semantic search already enhances this experience by understanding intent and context, providing relevant results without the need for a lengthy conversation.

LLM chat interfaces, while promising, often demand more effort and lead to less accurate results. They’re prone to hallucinations and lack the clarity of traditional search. Users are forced to type more, sometimes only to receive misleading information, and they lose the quick refinement and feedback loop that makes traditional search so reliable.

What do users want?

I can quite confidently say that most users aren’t looking for a conversation with a chatbot when they need information. They want speed, accuracy, and trustworthiness. They want to refine their search quickly and transparently. LLMs, when used as a replacement for traditional search interfaces, often detract from these qualities.

Semantic Search

Before jumping on the LLM bandwagon, it’s worth asking: is this really the best interface for search? Semantic search, for instance, remains an underutilized powerhouse. With natural language processing (NLP) techniques, semantic search understands intent and context, providing more relevant results without the added complexity of a chat-based interaction. It still gives users control—offering rich, accurate results that are easier to sift through with the added flexibility to refine or expand as needed.
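To make this concrete, here is a minimal sketch of what embedding-based semantic search can look like: documents and the query are embedded into the same vector space and ranked by similarity, with the results still returned as a plain list. It assumes the sentence-transformers library; the model name and the toy documents are illustrative placeholders, not recommendations.

```python
# A minimal semantic-search sketch: embed documents and a query,
# then rank documents by cosine similarity to the query.
# Assumes the sentence-transformers library; the model name and
# documents below are illustrative placeholders.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "How to reset your account password",
    "Troubleshooting slow page loads",
    "Refund policy for annual subscriptions",
]

# Normalized embeddings make cosine similarity a simple dot product.
doc_vecs = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 3):
    query_vec = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ query_vec
    ranked = np.argsort(-scores)[:top_k]
    # Return a plain ranked list the user can scan and refine.
    return [(documents[i], float(scores[i])) for i in ranked]

print(search("I want my money back"))
```

The point is that the interface stays a ranked list the user can scan, click, and refine; the NLP only improves what lands in that list.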

The beauty of semantic search lies in its balance of power and simplicity. It doesn’t discard the well-established user flow of search bars and result lists but enhances them. Users see the results first, can evaluate them quickly, and still have the freedom to refine the search.

Rather than replacing search interfaces, we should consider using LLMs to augment them. For example, LLMs can summarize search results after they’ve been presented, adding value without sacrificing user control or trust.
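As a rough sketch of that pattern, the snippet below takes results that have already been shown to the user and asks an LLM for a short summary to display alongside them. It assumes the OpenAI Python client; the model name and prompt wording are illustrative choices, not the only way to do this.

```python
# A sketch of the "augment, don't replace" idea: the ranked results are
# shown to the user as usual, and an LLM adds an optional summary on top.
# Assumes the OpenAI Python client; the model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

def summarize_results(query: str, results: list[str]) -> str:
    """Return a short summary of already-displayed search results."""
    prompt = (
        f"Query: {query}\n\nResults:\n"
        + "\n".join(f"- {r}" for r in results)
        + "\n\nSummarize these results in two sentences. "
        "Only use information present in the results."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The result list stays visible and clickable; the summary is an extra layer.
results = [
    "Refund policy for annual subscriptions",
    "How to cancel a subscription before renewal",
]
print(summarize_results("I want my money back", results))
```

The ranked list stays visible and clickable; the summary is an optional layer on top, so a questionable sentence can be checked against the results directly underneath it.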

In the end, users want speed, accuracy, and efficiency—not a complex conversation with a chatbot. By focusing on enhancing existing search systems, we can build better, more intuitive tools that truly meet user needs.
