How will LLMs evolve conversational search?
How will generative AI evolve search ad monetisation?
LLMs and generative AI are making the process of search more conversational. But what does this shift mean for search as a marketing tactic? And what about the $250 billion search ad industry it supports?
How will advertisers adapt their advertising to conversational search?
To get started, let’s first understand what conversational search is and how LLMs can evolve the conversational search experience.
Conversational search refers to the ability of search engines to understand and respond to user queries in a natural, dialogue-like manner, rather than relying solely on traditional keyword-based searches. LLMs, with their advanced capabilities in understanding and generating human-like text, can play a pivotal role in enhancing conversational search.
LLMs are trained on vast amounts of text, enabling them to understand the context better than traditional models. For instance, if a user asks, "Who directed the movie with blue people in a forest?" an LLM can infer that the user is referring to James Cameron's "Avatar". This deep contextual understanding allows for more intuitive search experiences.
Human language is often ambiguous. LLMs can use context to disambiguate queries. For example, the word "apple" could refer to the fruit or the tech company. Based on previous interactions or related queries, LLMs can determine which meaning the user is more likely referring to.
Users often have follow-up questions to their initial queries. LLMs can remember the context of the conversation and provide relevant answers to subsequent questions. For instance, after asking about the capital of France and getting the answer "Paris", a user might then ask, "What is its famous landmark?" An LLM can understand the connection and respond with "Eiffel Tower".
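The mechanics behind this kind of follow-up handling can be sketched simply: the search layer keeps the whole dialogue history and sends it with every new query, so the model can resolve references like "its" from earlier turns. The class and function names below are illustrative, and the toy model is a stand-in for a real LLM call, not any actual API.

```python
# A minimal sketch of multi-turn conversational search: the engine keeps
# the full dialogue history and passes it with every new query, so the
# model can resolve follow-ups from earlier turns.

class ConversationalSearch:
    def __init__(self):
        self.history = []  # alternating user/assistant turns

    def ask(self, query, answer_fn):
        # The whole history, not just the latest query, is the model input.
        self.history.append({"role": "user", "content": query})
        answer = answer_fn(self.history)
        self.history.append({"role": "assistant", "content": answer})
        return answer

# A stand-in for an LLM: a real system would send `history` to a model.
def toy_model(history):
    last = history[-1]["content"].lower()
    if "capital of france" in last:
        return "Paris"
    if "its famous landmark" in last:
        # The answer depends on earlier turns, not the current query alone.
        if any("Paris" in turn["content"] for turn in history):
            return "Eiffel Tower"
    return "Could you clarify?"

search = ConversationalSearch()
print(search.ask("What is the capital of France?", toy_model))  # Paris
print(search.ask("What is its famous landmark?", toy_model))    # Eiffel Tower
```

The second query is unanswerable in isolation; it only works because the accumulated history travels with it, which is exactly the contextual memory described above.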
Instead of just returning a list of links, LLMs can provide direct answers in complete sentences or even paragraphs. This makes the search experience more interactive and user-friendly, especially for voice-based searches where reading a list of results isn't practical.
If a user's query is too vague or the LLM isn't certain about the user's intent, it can ask clarifying questions. For instance, in response to a query like "Tell me about Apple's latest product," the LLM can ask, "Are you referring to the latest iPhone, iPad, or MacBook?"
LLMs can be integrated with other AI models that process images, videos, and audio. This will allow for a seamless conversational experience across different modalities. A user could, for example, show a picture and ask, "Who is this actor?" or play a song snippet and inquire, "Which song is this?"
By analysing a user's search history, preferences, and previous interactions, LLMs can tailor responses to individual users. This will ensure that the conversational search experience is not just generic but personalised to each user's context and needs.
And finally, LLMs can be designed to learn continuously from user interactions. This means that the more users engage in conversational search, the better the LLM becomes at understanding and responding to diverse queries.
The integration of LLMs into search engines is inevitable and it will massively change the way users seek and receive information online. By making search more conversational, intuitive, and context-aware, LLMs will bridge the gap between human-like interactions and digital information retrieval, setting the stage for a new era of search technology.
But then how does one monetise it?
The UX, design and interaction flow of conventional search today is built around surfacing links. People search an index of links to find the information they want, and in return, advertisers can promote their own links and products within that flow. It's an integrated flow of links.
But what happens when a generative AI model answers in detail, with no links and no easy way to check other possible answers? To monetise this user flow, one probable option would be for the generative AI model to recommend a product associated with the search query. And yes, theoretically, advertisers could "sponsor" this promotion (with the AI stating a disclaimer as such).
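One way this could work is a separate ad layer that matches the query against a sponsored inventory and appends a clearly labelled recommendation to the generated answer. The inventory, product names and matching logic below are illustrative placeholders, not any real ad platform's mechanism.

```python
# A hedged sketch of sponsored recommendations inside generated answers:
# after the model produces an answer, an ad layer checks the query against
# a sponsored inventory and appends a labelled suggestion with a disclosure.
# The products and keyword matching are hypothetical stand-ins.

SPONSORED = {
    "running shoes": "TrailRunner X",
    "coffee maker": "BrewMaster Pro",
}

def answer_with_sponsorship(query, generated_answer):
    for topic, product in SPONSORED.items():
        if topic in query.lower():
            disclosure = (
                f"\n\nSponsored suggestion: {product}. "
                "This placement is paid and labelled as such."
            )
            return generated_answer + disclosure
    return generated_answer  # no match: the answer is left untouched

out = answer_with_sponsorship(
    "What should I look for in running shoes?",
    "Look for cushioning, fit, and a heel drop that suits your gait.",
)
print(out)
```

The key design point is the explicit disclosure string: the sponsored line is appended to, never blended into, the model's answer, which is what separates a labelled placement from an undisclosed one.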
But then how do users know if the product is any good? LLMs draw on vast amounts of data to summarise answers from various sources, while users habitually check multiple sites, videos, blogs, reviews, testimonials and ratings before making a purchase.
Users actually want to research diverse viewpoints.
In the link world of advertisement, this entire user flow is integrated. We search for links, we click on links to read, we get served promoted links, we research other links to find what others are saying about a particular product and then we click on a link to make a purchase.
It’s an end-to-end click-based user interaction and behavioural flow.
Can this change to an entirely conversational flow, where a series of prompts (text or voice) delivers responses to our specific queries? In theory, yes, as LLMs gain real-time web browsing and indexing capability. They would still need to index content to serve answers, aid discovery and provide varied inputs, but since that content would update in real time, the curation, collation and delivery would simply be fast-tracked by conversational LLMs.
That probably means the means of monetisation would also have to become conversational. Conversational ad formats are not entirely new; the problem is that they are not very good yet. But let's take a step back and look at what we already have.
Digital advertising has always evolved, adapting to both technological advances and changing user behaviour. It has already seen a shift from static, one-way interactions to dynamic, two-way conversations. The transition is evident in messaging platforms like WhatsApp, Telegram and Facebook Messenger, which have billions of users and where bots are already used for commerce, indicating a clear preference for chat-based interaction.
Devices like Amazon's Alexa, Google Assistant and Apple's Siri have popularised voice-based conversations, though monetisation has proved to be a challenge.
Given these trends, it's logical that advertising has already experimented with a conversational approach, and some ad formats that support it already exist.
There are interactive ads that engage users in a dialogue, rather than just presenting static content.
They can:
- Ask users about their preferences
- Provide personalised product or service recommendations based on user responses
- Answer queries or address concerns in real-time
By engaging users in a dialogue, advertisers can tailor their messages based on real-time feedback, ensuring relevance. Such interactive conversations can hold users' attention longer than traditional ads, leading to better brand recall. Contextual mapping of conversations can also provide valuable insights into user preferences and behaviours, aiding in future marketing strategies. Instead of redirecting users to a website, conversational ads can facilitate instant purchases or sign-ups within the chat interface.
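The three capabilities listed above can be sketched as a small dialogue flow: the ad asks a preference question, then tailors its recommendation to the answer, all within the chat interface. The catalogue, product names and two-turn state machine are illustrative placeholders, not any real conversational ad platform.

```python
# A minimal sketch of an interactive conversational ad: ask a preference
# question, then recommend based on the user's reply, completing the flow
# inside the chat rather than redirecting to a website.
# The catalogue and products below are hypothetical.

CATALOGUE = {
    "road": "AsphaltGlide 2 road running shoe",
    "trail": "RockHopper trail running shoe",
}

def ad_turn(state, user_message=None):
    """One step of a two-turn conversational ad. Returns (reply, new_state)."""
    if state == "start":
        return ("Do you mostly run on road or trail?", "awaiting_preference")
    if state == "awaiting_preference":
        choice = "trail" if "trail" in user_message.lower() else "road"
        return (f"Then you might like the {CATALOGUE[choice]}.", "done")
    return ("Thanks for chatting!", "done")

reply1, state = ad_turn("start")
reply2, state = ad_turn(state, "Mostly trail, actually")
print(reply1)
print(reply2)
```

Even this toy version shows the commercial appeal: the user's stated preference is captured as structured data at the same moment the personalised recommendation is served.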
Currently, faceless chatbots have been deployed in various use cases to drive this user flow, but in most cases they have ultimately handed users more links to move the conversation forward.
The multimodal integration hasn’t always been that good.
But this will evolve as LLMs improve in pinpoint accuracy and gain the ability to summarise real-time information indexed across the web.
Now this isn’t as simple as it sounds. Vector embeddings within the LLM would have to work in curating new data sets and align with them in real-time. It would need a lot more computational power, context mapping, semantic alignment and a drastic reduction in LLM hallucinations. AI-powered chatbots can be much better integrated into display ads, initiating conversations when users click on the ad. These bots can guide users, answer questions, and even facilitate transactions by using digitised IDs integrated with voice samples and touch.
But we are not there yet, because for this to happen, merchant, monetisation and payment platforms would also need to evolve at the same pace while taking care of security, privacy and fraud.
Voice-based assistants like Alexa, Siri and Google Assistant burst onto the scene with great excitement but failed to live up to the hype because the ecosystem for them to thrive wasn't there. They have mostly been used for entertainment and basic tasks so far. Users haven't really taken to conversing with these products all day, and I doubt that behaviour will change any time soon. However, if integration and semantic context search improve based on our behaviour, voice-based conversational ads could engage users on these platforms through auditory dialogues. This would raise many questions around privacy, data collection practices and security, barriers that would need to be addressed.
Human beings will always love some bit of anonymity and for data-hungry algorithms, anonymity is a problem.
The same goes for AR. Confined mostly to niche use cases so far, augmented reality (AR) interactions can be improved by combining AR with conversational ads, letting users virtually "try" products and ask questions, enhancing the interactive experience. It's not that this functionality doesn't exist today. It does: go to any online spectacle shop and you can "try on" glasses, but the experience is neither immersive nor convincing.
For users to adopt this, everything hinges on the end-state user experience and how easy and intuitive it is. For advertisers, if all this innovation does not sell something for someone somewhere, the technology will be ahead of its time and organisations will pivot to whatever drives sales.
Overly aggressive or intrusive conversational ads will deter users. It will be crucial to strike a balance between engagement and respect for user space.
A space which we know very little about.
Algorithms are not very good (yet) at analysing mood, aspiration and the difference between need and desire.
There is no doubt that LLMs are here to stay. How the content, e-commerce and payment ecosystems evolve to leverage them is where monetisation methods will change. Integration with IoT devices, emotion recognition and multimodal transitions will be some of the key factors to watch out for.
Just as the iPhone created the app industry that became part of our everyday lives, LLMs are just the foundational element in how search and online transactions will change over the coming years. Delivering unbiased recommendations by indexing the petabytes of data generated every day is not an easy task, but technology has always evolved.
It’s the timing that will be important.