Measure Now Assist Skill Usage: A Developer's Guide
Understanding how your team leverages generative AI features is crucial for optimization and strategic planning. When Arjun, a developer, needs to review the usage of generative AI features on his team's instance, specifically focusing on Now Assist skill usage, he needs to know precisely where to look. This article walks through the options he is considering and pinpoints the most effective unit for measuring Now Assist skill usage.
Understanding Now Assist and Its Usage Metrics
Now Assist is ServiceNow's powerful suite of generative AI capabilities designed to enhance productivity and streamline workflows. It offers features like intelligent summarization, code generation, and conversational assistance. To truly gauge the impact and adoption of these features, accurate measurement is key. Arjun’s question, “Which unit should he look at to measure Now Assist skill usage?” is a common one. The answer isn't always straightforward, as different components of the platform generate different types of logs and metrics. However, when the goal is to understand the direct utilization of Now Assist's skills – the specific AI-powered actions that provide value – we need to focus on the components that directly record these interactions. Let's break down the options Arjun is considering:
Virtual Agent Logs
Virtual Agent logs are a valuable resource for understanding conversational AI interactions. They record the flow of conversations, user inputs, and the responses provided by the Virtual Agent. If Now Assist features are integrated within the Virtual Agent experience (for example, if the Virtual Agent uses Now Assist to summarize a case or generate a response), then these logs will certainly contain information about those interactions. You can see when a user invoked a conversational flow, what they asked, and what the agent (potentially powered by Now Assist) responded with. This provides a good high-level view of how the conversational AI is being used. However, while Virtual Agent logs can indicate that a Now Assist-powered response was generated within a conversation, they might not always provide granular details about the specific skill within Now Assist that was invoked or the precise output of that skill if it's not directly part of the conversational turn. They tell you that an interaction happened, but sometimes struggle to detail the exact nature of the AI's contribution beyond the conversational context. If Arjun is purely looking at the conversational aspect and how Now Assist enhances it, this is a good starting point, but it might not be the most direct measure of skill usage in isolation.
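The limitation described above, that Virtual Agent logs show an AI-assisted interaction happened but not necessarily which skill did the work, can be illustrated with a small sketch. This assumes log entries exported as records with hypothetical `conversation_id` and `response_source` fields (not the platform's actual schema); about all you can easily recover at this level is which conversations involved Now Assist at all:

```python
# A minimal sketch of mining exported Virtual Agent log entries for
# Now Assist activity. The field names (conversation_id, response_source)
# are hypothetical; adapt them to the actual schema of your instance's
# Virtual Agent log export.

def conversations_with_now_assist(log_entries):
    """Return the set of conversation IDs in which at least one
    response was attributed to Now Assist."""
    return {
        entry["conversation_id"]
        for entry in log_entries
        if entry.get("response_source") == "now_assist"  # hypothetical marker
    }

# Three log entries across two conversations; only c1 contains a
# Now Assist-powered response.
sample_logs = [
    {"conversation_id": "c1", "response_source": "topic_flow"},
    {"conversation_id": "c1", "response_source": "now_assist"},
    {"conversation_id": "c2", "response_source": "topic_flow"},
]

print(conversations_with_now_assist(sample_logs))  # {'c1'}
```

Note what is missing from the result: nothing here tells you which Now Assist skill produced the response, which is exactly the granularity gap discussed above.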
Flow Executions
Flow executions are integral to ServiceNow's automation capabilities. Flows, including those that might leverage Now Assist for specific tasks, represent automated processes built within the platform. When a flow is triggered, its execution is logged, detailing the steps taken, any decisions made, and the outcomes. If Now Assist functionalities are embedded within a flow – perhaps to enrich data, generate content for a notification, or perform a complex analysis – then monitoring flow executions can indeed provide insights. You can see which flows are running, how often, and potentially identify flows that are calling Now Assist APIs. This is particularly useful if Now Assist is used as a component within a larger automated workflow. However, similar to Virtual Agent logs, flow executions provide a broader view of automation. While they can confirm that a Now Assist-related step occurred within a flow, they may not always isolate the direct usage of a specific Now Assist skill as the primary metric. The focus here is on the automation itself, not necessarily the granular usage of individual AI skills. It’s akin to looking at the blueprint of a house to see if a particular appliance is installed, rather than checking the appliance's usage meter. If Arjun is interested in how Now Assist fits into broader automated processes, flow executions are relevant, but for direct skill measurement, there might be a more precise source.
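To make the blueprint-versus-usage-meter point concrete, here is a hedged sketch of scanning flow-execution records for Now Assist steps. The record layout (a `flow_name`, a list of `steps`, a step `type` of `"now_assist"`) is illustrative only, not the platform's actual execution schema:

```python
from collections import Counter

def flows_invoking_now_assist(executions):
    """Count, per flow, how many executions contained at least one
    Now Assist step. Field names here are hypothetical placeholders."""
    counts = Counter()
    for run in executions:
        if any(step.get("type") == "now_assist" for step in run["steps"]):
            counts[run["flow_name"]] += 1
    return dict(counts)

# Sample executions: two runs of a triage flow (one touching Now Assist)
# and one run of a notification flow.
sample_runs = [
    {"flow_name": "case_triage",
     "steps": [{"type": "lookup"}, {"type": "now_assist"}]},
    {"flow_name": "case_triage",
     "steps": [{"type": "lookup"}]},
    {"flow_name": "notify",
     "steps": [{"type": "now_assist"}]},
]

print(flows_invoking_now_assist(sample_runs))
# {'case_triage': 1, 'notify': 1}
```

This confirms that a Now Assist-related step occurred within a flow, but the unit being counted is still the flow execution, not the individual skill invocation.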
Prompts
Now, let's delve into Prompts. In the context of generative AI, and Now Assist specifically, a 'prompt' is the input text or query given to the AI model to elicit a response or action. When Arjun wants to measure Now Assist skill usage, prompts are the most direct indicator. Each time a Now Assist skill is invoked to perform a task, whether summarizing text, generating code, or answering a question, the invocation is initiated by a prompt. For instance, a request for a case summary sends a prompt to the summarization skill; a request for code sends a prompt to the code generation skill. By tracking prompts, Arjun gets a direct count of how many times each skill was asked to do something: a granular view of which skills are being used, how frequently, and for what kinds of tasks. This is the clearest line of sight into active engagement with the generative AI's core capabilities, and it is precisely what Arjun needs to understand the adoption and effectiveness of his team's generative AI investments. For measuring Now Assist skill usage directly, prompts provide the most accurate and granular data.
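The prompt-counting approach reduces to a very simple aggregation: one prompt record equals one skill invocation. The sketch below assumes prompt records with a hypothetical `skill` field; substitute whatever fields your instance's prompt log actually exposes:

```python
from collections import Counter

def skill_usage_from_prompts(prompt_records):
    """Tally how many times each Now Assist skill was invoked,
    treating one prompt as one invocation. The 'skill' field name
    is a hypothetical placeholder for the real log schema."""
    return Counter(rec["skill"] for rec in prompt_records)

# Sample prompt log: two summarization requests, one code generation.
sample_prompts = [
    {"skill": "summarization"},
    {"skill": "summarization"},
    {"skill": "code_generation"},
]

print(skill_usage_from_prompts(sample_prompts))
# Counter({'summarization': 2, 'code_generation': 1})
```

Unlike the Virtual Agent and flow-execution views, the unit counted here is exactly the thing Arjun wants to measure: an individual invocation of a specific skill.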
The Definitive Unit for Measuring Now Assist Skill Usage
Arjun needs to measure Now Assist skill usage. Virtual Agent logs and Flow executions offer valuable context about how Now Assist is integrated into broader workflows or conversational experiences, but they don't isolate the direct invocation of individual AI skills. Prompts, by contrast, represent the direct command or query sent to a Now Assist skill; each distinct prompt signifies an attempt to use a specific generative AI capability. Prompts are therefore the most accurate and granular unit for Arjun to examine when he wants to measure how many times Now Assist skills were used. By analyzing prompts, he can get a clear count of each skill invocation, understand the volume of requests, and categorize the types of tasks users are asking the AI to perform. This granular data is essential for understanding user adoption, identifying areas for improvement or further training, and making informed decisions about the future of generative AI within the team's instance.
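Once prompts are the chosen unit, the same records can feed trend analysis as well as raw counts. As a sketch, assuming each prompt record carries a hypothetical ISO `date` string alongside the `skill` field, grouping by day per skill turns the log into an adoption curve:

```python
from collections import defaultdict

def daily_prompt_volume(prompt_records):
    """Group prompt counts by (date, skill) so skill adoption can be
    tracked over time. Both field names ('date', 'skill') are
    hypothetical placeholders for the real prompt-log export."""
    volume = defaultdict(int)
    for rec in prompt_records:
        volume[(rec["date"], rec["skill"])] += 1
    return dict(volume)

# Sample prompts across two days.
sample_prompts = [
    {"date": "2024-06-01", "skill": "summarization"},
    {"date": "2024-06-01", "skill": "summarization"},
    {"date": "2024-06-02", "skill": "code_generation"},
]

print(daily_prompt_volume(sample_prompts))
# {('2024-06-01', 'summarization'): 2, ('2024-06-02', 'code_generation'): 1}
```

The same grouping idea extends to grouping by user, department, or task category, which supports the adoption and effectiveness questions raised above.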
Conclusion: Pinpointing Usage with Prompts
In summary, when Arjun needs to measure Now Assist skill usage on his team's instance, the most effective unit to examine is Prompts. Prompts are the direct inputs that trigger the execution of specific generative AI skills within Now Assist. Other sources, such as Virtual Agent logs and Flow executions, provide broader operational context but not the same level of detail about the direct invocation of AI capabilities. By focusing on prompts, Arjun gains visibility into exactly when and how often each skill is used, allowing for accurate measurement and insightful analysis of generative AI adoption. This granular data is indispensable for optimizing the use of these powerful tools and ensuring they deliver maximum value to the team.
For further insights into ServiceNow's Now Assist and generative AI capabilities, you can explore the ServiceNow Documentation for comprehensive guides and updates.