Conversation intelligence enables organisations to gain an intimate understanding of their customers’ needs, preferences, and pain points, as well as where frontline agents are or are not meeting those expectations. With the emergence of generative AI solutions, the conversation intelligence playing field has been levelled to a certain degree. Every day, the CallMiner team talks to organisational leaders who are curious whether they can harness large language models (LLMs) to build their own in-house conversation intelligence platform.
Building an in-house solution grants organisations some freedom to shape it to their objectives and requirements, and GPT makes the effort seem deceptively easy, but it’s ultimately harder than most anticipate. Innovation with LLMs and generative AI is just taking off – and unless you have the right team thinking about this day in and day out, you’ll likely be left with a less-than-desirable in-house conversation intelligence solution, while your competition licenses superior solutions for less.
Here are some key factors to consider:
Scope and use cases: The amount of development required for an in-house solution will largely depend on what you intend to do with generative AI. If you aim to create a comprehensive, scalable conversation intelligence system that involves continuous mining workflows, complex data processing, real-time analysis, and integration with other systems, the development effort will be significant.
Data privacy and security: Implementing robust data privacy and security measures is essential, especially when handling sensitive customer data, to ensure compliance with relevant regulations (e.g., GDPR, HIPAA, FISMA). This involves encryption, access controls, password restrictions, annual audits, and more – safeguards you don’t get with public LLMs.
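One common safeguard is redacting personally identifiable information before a transcript ever reaches an external LLM API. The sketch below is illustrative only – a production system would use a vetted PII-detection service rather than hand-rolled regexes, and the patterns shown are assumptions, not an exhaustive set:

```python
import re

# Illustrative PII patterns; a real deployment would rely on a dedicated
# PII-detection service and cover far more categories than these three.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with a typed placeholder, e.g. [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `redact_pii("Call me at 555-123-4567 or jane@example.com")` replaces both values with `[PHONE]` and `[EMAIL]` placeholders, so the downstream model never sees the raw identifiers.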
Response quality: Another hurdle in leveraging LLMs for conversation intelligence is response quality. Ensuring the quality and relevance of responses can be challenging, as the model may occasionally hallucinate, producing inaccurate, biased, or nonsensical outputs.
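One lightweight mitigation, sketched below under the assumption that your prompts ask the model to support each insight with verbatim quotes, is a grounding check: reject any response whose claimed quotes don’t actually appear in the source transcript. This catches one class of hallucination; it is not a complete quality-assurance strategy.

```python
def quotes_are_grounded(transcript: str, quotes: list[str]) -> bool:
    """Return True only if every quote the model claims to have extracted
    appears verbatim (case-insensitively) in the source transcript.

    A failed check is a strong signal the response may be hallucinated
    and should be retried or flagged for human review.
    """
    haystack = transcript.lower()
    return all(q.lower() in haystack for q in quotes)
```

A response that quotes text not present in the call would fail this check and could be routed back for a retry or to a reviewer queue.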
Integrations: Integrating your communication systems (e.g., CRM, ticketing, CCaaS platforms, social, survey tools, etc.) with your data storage solution is critical to ensure a continuous flow of interaction data. Integrating LLMs into your existing infrastructure or applications can be a complex task. You’ll need developers to work on integrating the model’s APIs, set up data pipelines, and ensure that it interacts properly with your systems.
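A recurring integration chore is that every upstream system names its fields differently, so interaction data must be normalised into one schema before analysis. The sketch below is a minimal example of that mapping step; the source names and field names are hypothetical, not the schemas of any particular CRM or CCaaS product:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    source: str        # e.g. "crm", "ccaas", "survey"
    customer_id: str   # unified customer identifier
    text: str          # the conversational content to analyse

# Hypothetical field names per upstream system; real integrations would
# read these mappings from configuration and handle missing fields.
FIELD_MAP = {
    "crm": ("contact_id", "notes"),
    "ccaas": ("caller_id", "transcript"),
    "survey": ("respondent_id", "comment"),
}

def normalise(source: str, record: dict) -> Interaction:
    """Map one raw record from a named source into the common schema."""
    id_field, text_field = FIELD_MAP[source]
    return Interaction(source, str(record[id_field]), record[text_field])
```

In practice this normalisation layer sits between the system connectors and the data pipeline that feeds the LLM, so the analysis code only ever sees one record shape.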
Actionable workflows: Designing effective workflows to extract actionable insights at scale from LLM-generated responses is crucial. While it might be easy to submit a block of text and ask generative AI to pull out findings in one-off scenarios, it’s much different (and can be incredibly expensive) to ask it to find insights at scale. Actionable workflows need to be carefully thought through and built out. Additionally, licensed conversation intelligence solutions go beyond mere conversation analysis; they bring a full suite of workflows into play, assisting in coaching, elevating agent performance, and streamlining quality control, among other valuable features.
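The “incredibly expensive at scale” point can be made concrete with a back-of-the-envelope token calculation. The sketch below uses a common rule of thumb of roughly 1.3 tokens per English word; the call volumes and per-token price in the example are illustrative assumptions, not real rates:

```python
def estimate_monthly_input_cost(num_calls: int,
                                avg_words_per_call: int,
                                usd_per_1k_tokens: float,
                                tokens_per_word: float = 1.3) -> float:
    """Rough input-token cost of sending every transcript through an LLM
    once per month. Ignores output tokens, retries, and multi-pass
    prompting, all of which push real costs higher.
    """
    total_tokens = num_calls * avg_words_per_call * tokens_per_word
    return total_tokens / 1000 * usd_per_1k_tokens
```

At an assumed 100,000 calls a month averaging 1,500 words each, and an assumed $0.01 per 1,000 input tokens, that is 195 million tokens, or about $1,950 per month for a single analysis pass over the inputs alone, before any output tokens, re-prompting, or experimentation.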
Refinement: Enterprise conversation intelligence solutions often come with years of refinement and optimization. They use a variety of specialised AI techniques (not just LLMs) to comprehensively analyse customer conversations, and that’s because LLMs aren’t the best fit for every business use case. Homegrown systems may lack the maturity and fine-tuning found in licensed alternatives, resulting in inefficiencies, as well as less accurate insights and responses.
Ongoing management: None of these activities are one-and-done processes. You’ll need to commit dedicated resources to continually evolve your infrastructure, apply new or updated security measures and governance, regularly monitor response quality, expand ecosystem connections, and more. Scalability issues may also arise as data volumes grow, leading to processing bottlenecks. It’s not enough just to build an in-house conversation intelligence solution; you also have to be prepared to maintain and improve it.
Considering these complexities, building your own in-house conversation intelligence solution using LLMs can be a substantial undertaking – both in human resources and in cost. It requires addressing various technical, ethical, and operational challenges and having a multidisciplinary team with expertise in machine learning, natural language processing, software development, and domain-specific knowledge. The development effort can range from several months to years, depending on the complexity of your project and the level of customization required, extending time-to-value.
With many conversation intelligence solutions on the market today that are incredibly advanced in their application of GPT and LLMs, organisations must carefully evaluate the intricacies of building in-house solutions against potential benefits to determine the most suitable path forward in today’s dynamic conversation intelligence industry.