AI Monitoring: A New Solution for AI-Powered Applications by New Relic

AI-powered applications are becoming more common and complex, requiring engineers to have a comprehensive understanding of the performance, quality, cost, and ethical implications of their AI workloads. To address this challenge, New Relic, the leading observability platform for every engineer, has launched AI monitoring, a new APM solution that offers unparalleled visibility and insights into AI-powered applications.

What is AI monitoring?

AI monitoring is a new APM solution that enables engineers to monitor, troubleshoot, and optimize their AI-powered applications with ease. It provides end-to-end observability across the AI application stack, from the application and infrastructure layers to the AI layer itself, including large language models (LLMs) and other AI components.

It supports more than 50 integrations and quickstarts with popular AI tools and frameworks, such as LangChain, OpenAI, PaLM 2, Hugging Face, PyTorch, TensorFlow, Amazon SageMaker, Azure ML, Pinecone, Weaviate, Milvus, FAISS, Azure, AWS, and GCP. From a single view, engineers can see all the relevant metrics and data for their AI-powered applications, such as response quality, token usage, response time, throughput, errors, and cost.
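
To make that concrete, here is a minimal sketch of the kind of per-request data such a view aggregates, recorded as a custom event with the New Relic Python agent. The event type, attribute names, and the call_llm stub are illustrative assumptions rather than the integrations' built-in schema; the quickstarts capture equivalent data automatically.

```python
# Minimal sketch: reporting per-request LLM data (tokens and latency) as a
# custom event via the New Relic Python agent. Assumes the agent has been
# initialized elsewhere (for example by launching the process with
# `newrelic-admin run-program`). Event type and attribute names are illustrative.
import time

import newrelic.agent


def call_llm(prompt: str) -> tuple[str, int, int]:
    # Stand-in for any LLM client call; returns text plus token counts.
    return "example completion", len(prompt.split()), 12


@newrelic.agent.background_task(name="llm-completion")
def answer(prompt: str) -> str:
    start = time.monotonic()
    text, prompt_tokens, completion_tokens = call_llm(prompt)
    newrelic.agent.record_custom_event(
        "LlmCompletionExample",  # hypothetical event type
        {
            "model": "example-model",
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "duration_ms": (time.monotonic() - start) * 1000,
        },
    )
    return text


if __name__ == "__main__":
    print(answer("What does AI monitoring track?"))
```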

It also provides unique features such as LLM response tracing and model comparison, which help engineers troubleshoot and optimize their LLM prompts and responses for performance, cost, security, and quality issues. Engineers can trace the lifecycle of complex LLM responses built with tools like LangChain, and identify and fix problems such as bias, toxicity, hallucination, and fairness issues.
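
As a rough illustration of what that lifecycle looks like at the application level, the sketch below marks the stages of a simple retrieve-then-generate pipeline as function traces with the New Relic Python agent, so each stage appears as its own segment in the transaction trace. The pipeline functions are made-up stand-ins for a LangChain-style chain, not New Relic's built-in LLM instrumentation.

```python
# Hedged sketch: tracing the stages of a retrieve-then-generate pipeline so
# each step shows up as a separate segment in the transaction trace. The
# pipeline functions are stand-ins for a LangChain-style chain; supported
# frameworks are traced automatically by the agent's integrations.
import newrelic.agent


@newrelic.agent.function_trace(name="retrieve_context")
def retrieve_context(question: str) -> str:
    # e.g. a vector-store lookup in a real application
    return "relevant documents for: " + question


@newrelic.agent.function_trace(name="build_prompt")
def build_prompt(question: str, context: str) -> str:
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"


@newrelic.agent.function_trace(name="call_llm")
def call_llm(prompt: str) -> str:
    return "example answer"  # stand-in for the actual model call


@newrelic.agent.background_task(name="rag-response")
def respond(question: str) -> str:
    context = retrieve_context(question)
    prompt = build_prompt(question, context)
    return call_llm(prompt)


if __name__ == "__main__":
    print(respond("How do I reduce token spend?"))
```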

They can also compare the usage, performance, quality, and cost of different models in a single view and optimize their use with insights on frequently asked prompts, chain of thought, and prompt templates and caches.

Why AI monitoring?

AI monitoring is designed to help engineers build and run AI-powered applications with confidence and responsibility. It helps engineers to:

  • Improve the performance and quality of their AI-powered applications by identifying and resolving issues quickly and efficiently.
  • Reduce the cost and complexity of their AI-powered applications by optimizing the use of resources and models and avoiding unnecessary expenses.
  • Ensure the responsible use of AI by verifying that their AI-powered applications are safe, secure, and ethical and comply with upcoming AI regulations.

How to get AI monitoring?

AI monitoring is now available in early access to New Relic users across the globe. Users can request early access by signing up; the capability is included in New Relic’s simplified, consumption-based pricing. Setup is straightforward, because New Relic agents ship with AI monitoring’s capabilities built in, so users can start monitoring their AI-powered applications in minutes.
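
For a sense of how little bootstrapping that involves in a Python service, here is a hedged sketch that initializes the agent from its standard newrelic.ini file and registers the application. Whether the early-access release needs an additional configuration flag to turn on AI-specific collection is an assumption worth checking against the documentation.

```python
# Minimal bootstrap sketch for a Python service monitored by the New Relic
# agent. Assumes a standard newrelic.ini (license_key, app_name) next to the
# application. Any AI-specific toggle required during early access is an
# assumption -- consult the AI monitoring docs for the exact setting.
import newrelic.agent

newrelic.agent.initialize("newrelic.ini")  # load the standard agent config file
application = newrelic.agent.register_application(timeout=10.0)


@newrelic.agent.background_task(application=application, name="startup-check")
def startup_check() -> str:
    return "agent registered; this process's LLM traffic can now be observed"


if __name__ == "__main__":
    print(startup_check())
```

Services launched with `newrelic-admin run-program` can skip the explicit initialize call, since the wrapper script handles agent initialization at startup.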

AI monitoring is a new and innovative solution for AI-powered applications that provides end-to-end observability and insights across the AI application stack. It helps engineers build and run AI-powered applications with confidence and responsibility, and it is the latest offering from New Relic, the all-in-one observability platform for every engineer. To learn more about AI monitoring and request early access, visit New Relic’s website.

How does it compare to other APM solutions?

New Relic’s AI monitoring is a new APM solution designed specifically for AI-powered applications. It provides observability and insights from start to finish across the AI application stack, including LLMs and other AI components. It also includes unique features such as LLM response tracing and model comparison to assist engineers in troubleshooting and optimizing their LLM prompts and responses for performance, cost, security, and quality issues.

Other APM solutions are more general in nature, focusing on monitoring the performance and health of applications regardless of whether they use AI. They typically provide metrics and data on the application, infrastructure, and, in some cases, database layers, but offer little visibility into the AI layer itself.