The Extract Tags action lets you pull structured values out of a conversation using a custom LLM prompt. It’s ideal for extracting named entities, customer intents, product names, or any information you want to pass into downstream tools.

💬 This action works best in chat mode and is designed to process the most recent user message or the full conversation.


🔍 What It Does

The action sends your prompt to the LLM and expects a structured result in return, usually a comma-separated list or a JSON-style response that can be saved into a parameter.
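
As a rough illustration of the pattern (not the platform's internal implementation), the sketch below uses a hypothetical `call_llm()` helper that stands in for the real model call: the prompt goes to the LLM along with the user's message, and the structured reply is parsed and saved into a parameter.

```python
def call_llm(prompt: str, message: str) -> str:
    # Placeholder for the platform's internal model call; returns a canned
    # reply here so the sketch runs end to end.
    return "billing, refund, urgent"

def extract_tags(user_message: str) -> list[str]:
    prompt = (
        "Extract the customer's issue tags from the message below. "
        "Return the tags as a comma-separated list."
    )
    raw = call_llm(prompt, user_message)   # e.g. "billing, refund, urgent"
    return [tag.strip() for tag in raw.split(",") if tag.strip()]

# The parsed result is the kind of value you would save into a parameter
# for later steps in the workflow.
parameters = {"issue_tags": extract_tags("I was charged twice and need a refund today!")}
print(parameters)   # {'issue_tags': ['billing', 'refund', 'urgent']}
```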


🖼️ Action Interface


⚙️ Configuration Options


🎯 Use Cases

  • Extract customer issue tags for routing
  • Identify urgency or sentiment from a request
  • Convert a sentence into variables for logic or filtering (see the sketch after this list)
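
For example, once the action has turned a sentence like "I was charged twice and I'm furious" into a few variables, downstream logic can branch on them. A loose Python sketch, with made-up values and flow names:

```python
# The dictionary below is an assumed example of what Extract Tags might have
# saved into parameters; the flow names are invented for illustration.
extracted = {"topic": "billing", "urgency": "high", "sentiment": "negative"}

# Simple routing/filter logic driven by the extracted variables.
if extracted["urgency"] == "high":
    next_flow = "escalate_to_agent"
elif extracted["topic"] == "billing":
    next_flow = "billing_flow"
else:
    next_flow = "general_support_flow"

print(next_flow)   # escalate_to_agent
```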

🧠 Tips

  • Use specific instructions in your prompt, such as: “Return tags in a comma-separated list” or “List them as JSON fields” (see the sketch after this list)
  • Combine with Set Current Flow, Function, or Return Value actions for dynamic control
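
To see why the formatting instruction matters, here is a rough sketch of how the two response styles could be handled once the reply comes back; the example replies are illustrative, not guaranteed model output.

```python
import json

# Illustrative model replies for the two prompt styles; real output will vary.
comma_reply = "billing, refund, urgent"                  # "Return tags in a comma-separated list"
json_reply = '{"topic": "billing", "urgency": "high"}'   # "List them as JSON fields"

# Comma-separated reply: split into a flat list of tags.
tags = [t.strip() for t in comma_reply.split(",") if t.strip()]

# JSON reply: parse into named fields you can reference individually.
fields = json.loads(json_reply)

print(tags)                # ['billing', 'refund', 'urgent']
print(fields["urgency"])   # high
```

The JSON style maps naturally onto several named parameters, while the comma-separated style is simplest when you only need a single list of tags.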

Need help writing extraction prompts or chaining this into a workflow? Just ask!