How Twig's AI Co-Pilot Supports CX Teams

An in-depth review of Twig's architecture and AI models

Up-to-Date AI Trained on Private Data

  • Data Pipelines: Twig refreshes data from your private data sources weekly, or at whatever frequency has been configured. This is critical for keeping the AI up to date.

  • No Engineering Time Needed: The CX team can set Twig up from the UI without any engineering time, which is usually a major hurdle when implementing AI products.

  • Personally Identifiable Information (PII) Filters: Data sources like ticketing history often contain confidential information such as names, emails, and zip codes. Twig identifies and removes this information before using the data to train the AI; a minimal sketch of such a filter follows below.
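As an illustration only, a PII filter pass over ticket text could look like the sketch below. The regex patterns and the redact() helper are assumptions for illustration, not Twig's actual implementation; a production filter would also use named-entity recognition to catch names.

```python
import re

# Illustrative PII patterns; a real filter would also use NER to catch names.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "zip_code": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
    "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognized PII spans with typed placeholders before indexing."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Reach me at jane.doe@example.com, ZIP 94103."))
# -> "Reach me at [EMAIL], ZIP [ZIP_CODE]."
```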

Deterministic AI

  • Retrieval-Augmented Generation: Twig implements a method called retrieval-augmented generation (RAG), which has been shown to reduce the hallucinations generally associated with generative AI. Twig pre-indexes customer data; when a customer question arrives, it uses semantic similarity to find the information chunks that match the question and then generates an answer from only that limited set of retrieved chunks (a minimal sketch follows this list). This method has a few key benefits:

  • No Hallucinations: Answers are generated from the retrieved sources, not from a generic internet corpus.

  • Traceability: You can trace an answer back to its sources, which are often shown as citations; this increases confidence in the answer. The source links can also be shared with your customers alongside the answer.
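The sketch below shows the general shape of such a retrieval-augmented loop. The embed() and llm_complete() functions are placeholders (a real deployment would use a semantic embedding model and a hosted LLM), and the small in-memory index stands in for a pre-built index of private data; none of these names are Twig's actual API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a deterministic random vector per text.
    A real system would use a semantic embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def llm_complete(prompt: str) -> str:
    """Stand-in for a hosted LLM; a real system generates the answer here."""
    return f"[answer generated only from]\n{prompt}"

# Pre-indexed chunks of private customer data.
chunks = [
    "To reset your password, open Settings > Security and click Reset.",
    "Invoices are generated on the first business day of each month.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most semantically similar to the question."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: float(q @ item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def answer(question: str) -> str:
    """Generate an answer constrained to the retrieved chunks."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer strictly from the context below and cite the chunk you used.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)

print(answer("How do I reset my password?"))
```

Because the prompt is built only from the retrieved chunks, each generated answer can be traced back to the chunks, and therefore the source documents, used to produce it.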

Human Control

Enterprise teams want to be able to control, refine, or alter what the AI says, often because the team knows things that are not clearly captured in the documentation, or understands nuances that only a human would pick up.

  • Persona-Based Behavior: Twig lets users define a persona, which covers how a question should be answered (short, long, or bulleted answers), which data source types to consider when answering, and what tonality to use (see the configuration sketch after this list).

  • Fine-Grained Control: Twig lets users regenerate an AI response by deselecting or adding data sources among those considered when generating it. This level of fine-grained control helps users make the AI work the way they need.

  • Training and Semantic Cache: Twig remembers when a user marks a response as accurate or edits it. When a future user asks a similar question, the curated response is fetched from a smart semantic cache, so fine edits made by humans carry forward into later answers (a sketch follows below).
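For illustration, a persona could be expressed as a small configuration object like the one below; the field names and values are assumptions, not Twig's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    answer_format: str = "bulleted"          # "short", "long", or "bulleted"
    data_source_types: list[str] = field(
        default_factory=lambda: ["docs", "knowledge_base"]
    )                                        # source types to consider when answering
    tonality: str = "friendly-professional"  # tone of the generated answer

# A CX agent who wants terse answers drawn only from documentation.
support_persona = Persona(answer_format="short",
                          data_source_types=["docs"],
                          tonality="concise")
```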
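A minimal sketch of such a semantic cache, reusing the placeholder embed() from the retrieval sketch above; the similarity threshold and function names are illustrative assumptions.

```python
# Cache of (question embedding, human-approved answer) pairs.
cache: list[tuple[np.ndarray, str]] = []

def remember(question: str, approved_answer: str) -> None:
    """Store an answer a human marked accurate or edited, keyed by its question."""
    cache.append((embed(question), approved_answer))

def lookup(question: str, threshold: float = 0.9) -> str | None:
    """Return the curated answer if a semantically similar question was seen before."""
    if not cache:
        return None
    q = embed(question)
    best_emb, best_answer = max(cache, key=lambda item: float(q @ item[0]))
    return best_answer if float(q @ best_emb) >= threshold else None

remember("How do I reset my password?", "Settings > Security > Reset.")
print(lookup("How do I reset my password?"))  # exact repeat: cache hit
# With a real embedding model, a paraphrase such as "How can I reset my
# password?" would also land above the threshold and hit the cache.
```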

Data Limitations

  • Synthetic Data Engine:

Data sources like support tickets from Salesforce (SFDC) and Zendesk carry a much lower density of information than documentation and knowledge bases, and this low-density data reduces overall AI response quality. Twig's Synthetic Data Engine extracts questions and answers from these conversation streams; the extracted data is used to improve AI quality, while the raw conversation stream can be discarded (a sketch follows below).
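A hedged sketch of what such an extraction step could look like: the prompt wording, the extract_qa_pairs() helper, and the `complete` callable are illustrative assumptions, not the actual Synthetic Data Engine.

```python
import json
from typing import Callable

def extract_qa_pairs(conversation: list[dict[str, str]],
                     complete: Callable[[str], str]) -> list[dict[str, str]]:
    """Distill a support thread into reusable question/answer pairs.

    `complete` is any LLM text-completion client (assumed, not Twig's API);
    it is expected to return a JSON list of {"question", "answer"} objects.
    """
    transcript = "\n".join(f"{turn['role']}: {turn['text']}" for turn in conversation)
    prompt = (
        "Extract each distinct customer question and its final resolution from "
        "this support thread. Respond with a JSON list of objects with "
        '"question" and "answer" keys.\n\n' + transcript
    )
    return json.loads(complete(prompt))

ticket_thread = [
    {"role": "customer", "text": "Our webhook stopped firing after the upgrade."},
    {"role": "agent", "text": "Re-enable it under Integrations > Webhooks; upgrades reset that toggle."},
]
# qa_pairs = extract_qa_pairs(ticket_thread, complete=my_llm_client)
# Only qa_pairs would be indexed; the raw ticket_thread can then be discarded.
```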

  • Data Marketplace:

Most products today do not exist in isolation; they exist in an ecosystem of other products, libraries, APIs, and platforms. When those adjacent platforms are not considered, the AI stops at the boundaries of the data available in the customer's own documentation. This prevents the AI from going deeper and answering L2/L3 questions, and it increases how often CX has to reach out to engineers or domain specialists. Twig's data marketplace makes it easy to subscribe to data corpora from adjacent products in the ecosystem, making the AI smarter about the customer's product and domain.

Custom AI Models

  • Deep AI Models: Twig is creating a family of AI models trained on open-source large language models. These models perform specific tasks that augment the AI capabilities within Twig; they are proprietary and available only to Twig's customers.

  • Twig Synthetic Knowledge Extractor: One of the models in the TwigLM family, used for synthetic knowledge extraction.
