How a Data API Enables the MeetKai Next-Generation Voice Assistant

July 23, 2021

At MeetKai, the goal is greater than being just another voice assistant (VA). There are, after all, plenty of those, from Apple’s Siri to Google Assistant, Cortana, and others. To be a next-generation voice assistant, you need to be a concierge that truly understands your users’ preferences, learns from them, and keeps track of context to personalize results. For example, if you’ve recently been looking for information on Italian restaurants, the search engine powering your voice assistant should remember that context the next time you ask for a restaurant.

However, keeping track of preferences and other relevant activity, such as recent searches, is no easy task. To make it work, MeetKai needs very fast access to data, without the connection and operational overhead of a traditional database. The answer for MeetKai was Fauna, a serverless database that delivers low-latency access through a web-native API.


Kai, the persona who serves as the concierge/assistant, keeps contextual user information in database instances powered by Fauna. As users ask questions, Kai adds details about their preferences and profile information to their database. Queries are routed through Cloudflare Workers to this database in a way that protects each user’s privacy under a strict policy.
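
As a rough sketch of that routing, the Worker below looks up a caller’s context document in Fauna with the faunadb JavaScript driver. The index name (factors_by_user), the request shape, and the overall flow are assumptions for illustration, not MeetKai’s actual schema or code.

```ts
// Sketch only: a Cloudflare Worker that fetches the caller's stored context
// from Fauna before passing the query on to search. Names are assumed.
import { Client, query as q } from "faunadb";

export interface Env {
  FAUNA_SECRET: string; // Fauna key bound to the Worker as a secret
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { userId, utterance } = (await request.json()) as {
      userId: string;
      utterance: string;
    };

    const client = new Client({ secret: env.FAUNA_SECRET });

    // Look up the user's context document via an index keyed on user ID.
    const factors = await client.query(
      q.Get(q.Match(q.Index("factors_by_user"), userId))
    );

    // Hand the utterance plus stored context to the search backend;
    // personalization itself happens downstream.
    return new Response(JSON.stringify({ utterance, factors }), {
      headers: { "content-type": "application/json" },
    });
  },
};
```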

Because the data lives close to the edge and is accessed through APIs, responses are extremely fast, which is essential for a digital assistant service like MeetKai. The software running on devices, the MeetKai Voice Assistant, can retrieve the user data it needs and then search for relevant results wherever they exist, allowing MeetKai to serve its customers immediately.

Behind the Scenes

A user of the MeetKai VA sees a screen and touches a picture of Kai to start asking questions. When a question is asked, it is displayed on the screen and the MeetKai app sends a query to its edge-hosted backend. MeetKai shapes the search using information in Fauna to prioritize the results a user is most likely looking for. The results that appear on the screen should reflect the user’s interests and, when relevant, leverage context from their recent searches.
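
One way to picture that shaping step is a small re-ranking pass that boosts results matching the interests stored for the user. The types and weighting below are illustrative only, not MeetKai’s actual ranking logic.

```ts
// Illustrative only: boost search results whose tags overlap the
// interests recorded in the user's Factors Document.
interface SearchResult {
  title: string;
  tags: string[];    // e.g. ["restaurant", "barbecue"]
  baseScore: number; // relevance score from the search backend
}

function personalize(results: SearchResult[], interests: Set<string>): SearchResult[] {
  return results
    .map((r) => {
      const matches = r.tags.filter((t) => interests.has(t)).length;
      // Simple additive boost per matching interest; the real weighting
      // would come from a trained personalization model.
      return { ...r, baseScore: r.baseScore + 0.25 * matches };
    })
    .sort((a, b) => b.baseScore - a.baseScore);
}
```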

All of this data is stored in what’s called a Factors Document, which contains a unique ID that serves as the primary key for keeping track of a user’s interests. For example, if a user regularly searches for barbecue restaurants, then barbecue is stored in that user’s Factors Document as a key interest when considering restaurants. If the user also looks for barbecue recipes, that information is stored among their food interests in the Factors Document as well.
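
A Factors Document along these lines might look something like the sketch below; the field names are assumed for illustration rather than taken from MeetKai’s published schema.

```ts
// Hypothetical shape of a Factors Document stored in Fauna.
interface FactorsDocument {
  userId: string; // unique ID used as the primary key
  interests: {
    // Interests grouped by domain, e.g. restaurants vs. recipes.
    restaurants: string[]; // e.g. ["barbecue", "italian"]
    recipes: string[];     // e.g. ["barbecue"]
  };
  preferences: {
    maxTravelMinutes?: number; // what the user treats as "nearby"
  };
  updatedAt: string; // ISO timestamp of the last update
}
```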

MeetKai’s VA gets better as users ask more questions: with every new query, MeetKai adds to the preferences and other relevant information in the Factors Document. Kai also asks questions to determine whether its searches are meeting the user’s needs, and those responses are added to the list of preferences.
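
Folding a newly observed interest into that document could look like the sketch below, which reuses the hypothetical factors_by_user index from earlier and deduplicates the interest list. It is a sketch of the general approach with the faunadb driver, not MeetKai’s actual update logic.

```ts
import { Client, query as q } from "faunadb";

// Sketch: append a newly observed interest (e.g. "barbecue") to a user's
// Factors Document without duplicates. Index and field names are assumed.
async function recordInterest(
  client: Client,
  userId: string,
  domain: string,   // e.g. "restaurants" or "recipes"
  interest: string  // e.g. "barbecue"
) {
  return client.query(
    q.Let(
      { doc: q.Get(q.Match(q.Index("factors_by_user"), userId)) },
      q.Update(q.Select("ref", q.Var("doc")), {
        data: {
          interests: {
            // Update merges objects, so only this domain's list is replaced.
            [domain]: q.Distinct(
              q.Append(
                [interest],
                q.Select(["data", "interests", domain], q.Var("doc"), [])
              )
            ),
          },
        },
      })
    )
  );
}
```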

For example, if you ask MeetKai for a restaurant that’s nearby, the suggestion might still be a 20-minute drive away. Kai can then ask whether that was close enough, and your responses help MeetKai learn what you consider to be “nearby” when you ask for restaurant suggestions.
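
As a sketch of how such feedback could be folded into the stored preferences, the snippet below nudges a hypothetical maxTravelMinutes value depending on whether the user accepted a suggestion at a given travel time. The field name and blending rule are assumptions, not MeetKai’s actual logic.

```ts
// Illustrative only: adjust what counts as "nearby" based on whether the
// user accepted a suggestion that was `travelMinutes` away.
function updateNearbyThreshold(
  current: number,        // current maxTravelMinutes preference
  travelMinutes: number,  // travel time of the suggested place
  accepted: boolean       // did the user accept the suggestion?
): number {
  // Move the threshold a small step toward (or away from) the observed value.
  const target = accepted
    ? Math.max(current, travelMinutes)
    : Math.min(current, travelMinutes - 5);
  return Math.round(current + 0.3 * (target - current));
}
```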

Rolling it Up

MeetKai periodically runs several rollup queries against the edge database to build a training dataset, which is then used to train a new, more relevant personalization model. That model is used to create a new personalization document for each user, and the new documents are batch loaded back into the database, where Cloudflare Workers can retrieve the most recent personalization document at the edge with virtually no delay.
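
That rollup-and-reload cycle could be sketched roughly as follows, again with the faunadb driver and assumed collection names (user_factors, personalization). A real export would page through results with an after cursor, and the model training itself happens outside the database.

```ts
import { Client, query as q } from "faunadb";

// Sketch of the rollup/reload cycle with hypothetical collection names.

// 1) Export Factors Documents as training rows (first page only here).
async function exportFactors(client: Client) {
  const page = await client.query<{ data: unknown[] }>(
    q.Map(
      q.Paginate(q.Documents(q.Collection("user_factors")), { size: 1000 }),
      q.Lambda("ref", q.Get(q.Var("ref")))
    )
  );
  return page.data; // feed these documents into the training pipeline
}

// 2) Batch-load the freshly built personalization documents.
async function loadPersonalization(
  client: Client,
  docs: Array<{ userId: string; model: unknown }>
) {
  return client.query(
    q.Foreach(
      docs,
      q.Lambda("doc", q.Create(q.Collection("personalization"), { data: q.Var("doc") }))
    )
  );
}
```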


Fauna is key to MeetKai’s operation because it keeps the core dataset at the edge and works frictionlessly with the serverless stack. With Fauna, the data is always available through a web-native (HTTP-based) API, so it scales without creating connection bottlenecks and, by staying at the edge, maintains very low latency. This allows MeetKai to serve its customers high-quality responses as soon as they ask a question.
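
That HTTP-based access can be illustrated with a bare fetch against Fauna’s GraphQL endpoint: no driver, no connection pool, just a request with a bearer secret. The query and field names below are hypothetical and depend entirely on the schema an application uploads.

```ts
// Sketch: querying Fauna over plain HTTPS from an edge function.
// The GraphQL schema (findFactorsByUserId and its fields) is an assumption.
async function fetchFactors(userId: string, secret: string) {
  const res = await fetch("https://graphql.fauna.com/graphql", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${secret}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query: `query($id: String!) {
        findFactorsByUserId(userId: $id) { userId interests }
      }`,
      variables: { id: userId },
    }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.findFactorsByUserId;
}
```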

In addition, with a database at the edge, MeetKai can provide additional services and expand the capabilities of its voice assistant. The preferences stored in the database remain available to those expanded capabilities and new services, which keep the performance benefits of running at the edge.

By using industry-standard building blocks such as Fauna and Cloudflare Workers, MeetKai avoids the development delays and operational overhead that come with bespoke solutions. Instead, it can stay focused on innovation, bringing the benefits of serverless development and edge computing to its customers.

Source: JAXenter