With the Robotic Data Automation (RDA) Observability Pack, you can automate essential observability activities such as provisioning data collection modules, sharing data with data lakes, generating analytics, customizing reports, and more.
Example Use Cases
Build & Run an Observability Pipeline to Collect Metrics, Logs, and Traces for a Cloud-Native Application
Build and run an observability pipeline using software bots to collect time-series performance metrics, events or logs, and application traces for cloud-native application stacks (e.g., microservices/containers) running in any public cloud environment, leveraging cloud integration bots and open instrumentation agents.
Unified Open Observability for any Traditional Application Stack
Build and run an observability pipeline using software bots to collect time-series performance metrics, events or logs, and application traces for traditional 3-tier or client-server application stacks running in on-premises and hybrid environments, leveraging integration bots for packaged applications, ITOM tools, and open instrumentation.
Build Custom Observability Reports
Customize and schedule the generation of observability reports, including Availability & Uptime Report, Performance Report, Alerts and Incidents Report, Assets Inventory Report, Asset Configuration Report, Server Events Report, Network Events Report, Security Events Report, and more.
Unified Open Observability for Edge Workloads/IoT Devices
Enable complete open observability for edge workloads and IoT devices using remote data collection modules, streaming data bots, message delivery, log serving, and more.
Deliver Observability Data to Messaging Systems, Analytics Tools, or Data Lakes
Build pipelines to deliver metrics, events, logs, or traces to popular messaging systems (e.g., Kafka, NATS, AWS SQS/SNS, and more), analytics tools (e.g., Apache Spark), or data lakes (e.g., Splunk or Elasticsearch/Kibana).
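As a minimal sketch of what such a delivery pipeline might emit, the function below serializes one metric sample into a keyed JSON event of the kind a Kafka producer would publish. The function name, event fields, and topic/key convention are illustrative assumptions, not RDA's actual bot interface:

```python
import json
import time

def to_kafka_event(metric_name, value, labels=None):
    """Serialize one metric sample into a (key, value) byte pair suitable
    for publishing to a messaging system. Keying by metric name keeps all
    samples for a metric on the same Kafka partition, preserving order."""
    event = {
        "metric": metric_name,
        "value": value,
        "labels": labels or {},
        "timestamp": time.time(),
    }
    return metric_name.encode("utf-8"), json.dumps(event).encode("utf-8")

# With a real broker, a client library such as kafka-python would then
# publish the pair, e.g.:
#   KafkaProducer(bootstrap_servers="broker:9092").send(
#       "metrics", key=key, value=value)
key, value = to_kafka_event("cpu.utilization", 72.5, {"host": "web-01"})
```

The same envelope could be posted to NATS, SQS/SNS, or indexed into Elasticsearch; only the transport call changes.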
Automate Observability Tooling and Integrate with the DevOps Process
Automate configuring virtual and container instances for continuous observability of OS- and application/service-level metrics, logs, and traces per policy. This can be integrated with the DevOps process so that every infrastructure or application instance deployed in QA/Staging/Production is completely observable.
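To illustrate policy-driven configuration, the sketch below renders an observability-agent configuration for a newly deployed instance from a declarative policy. The policy schema, field names, and the render_agent_config helper are all hypothetical, not an actual RDA or agent format:

```python
def render_agent_config(policy, instance):
    """Render an agent configuration for one deployed instance from a
    declarative policy. Unset policy keys fall back to sensible defaults;
    trace collection defaults to on only in production."""
    return {
        "instance": instance["name"],
        "environment": instance["env"],  # e.g. qa / staging / production
        "collect_metrics": policy.get("metrics", True),
        "collect_logs": policy.get("logs", True),
        "collect_traces": policy.get("traces", instance["env"] == "production"),
        "scrape_interval_seconds": policy.get("interval", 60),
    }

# A CI/CD deployment hook could call this for every instance the pipeline
# brings up, so each one comes online observable by default.
config = render_agent_config({"interval": 30}, {"name": "api-1", "env": "qa"})
```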
Dynamic Alert Condition Detection
Traditional approaches rely on DevOps/SRE engineers to configure rules that generate alerts. In dynamic environments, however, this does not scale and risks missing alerts. With this solution pack, key metrics, logs, and traces are continuously analyzed to learn patterns and detect anomalies, which can then be surfaced as alerts.
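A minimal stand-in for this kind of rule-free alerting is a rolling z-score detector: each new sample is compared against a trailing window of recent history, and points that deviate strongly from the learned baseline are flagged. The function below is an illustrative sketch, not the pack's actual algorithm:

```python
from statistics import mean, stdev

def detect_anomalies(series, window=20, threshold=3.0):
    """Flag points whose z-score against a trailing window exceeds the
    threshold. Returns (index, value, z-score) tuples for each anomaly."""
    anomalies = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:  # flat history: no baseline variance to score against
            continue
        z = (series[i] - mu) / sigma
        if abs(z) > threshold:
            anomalies.append((i, series[i], round(z, 2)))
    return anomalies
```

Because the baseline is recomputed at every step, the detector adapts as the metric drifts, which is what makes it workable where hand-tuned static thresholds are not.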
Predict Trends & Anomalies for Any Time-Series Data
Collect, analyze, and predict trends and anomalies on any time-series data collected from the underlying observability tools. New KPIs can also be defined and computed from the underlying metrics. The solution also supports converting log data into time-series data, extending prediction capabilities to logs and other unstructured data.
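The two steps above, turning logs into a time series and projecting its trend, can be sketched as follows. The helper names, the (timestamp, message) log shape, and the ERROR-count heuristic are illustrative assumptions, not the pack's actual implementation:

```python
def logs_to_series(log_lines, bucket_seconds=60):
    """Convert raw (epoch_seconds, message) log lines into a per-bucket
    error-count series -- one way to make unstructured logs predictable."""
    buckets = {}
    for ts, message in log_lines:
        if "ERROR" in message:
            bucket = int(ts // bucket_seconds)
            buckets[bucket] = buckets.get(bucket, 0) + 1
    start, end = min(buckets), max(buckets)
    return [buckets.get(b, 0) for b in range(start, end + 1)]

def predict_next(series):
    """Fit a least-squares linear trend and forecast the next point."""
    n = len(series)
    x_mean = (n - 1) / 2
    y_mean = sum(series) / n
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in enumerate(series)) / \
            sum((x - x_mean) ** 2 for x in range(n))
    intercept = y_mean - slope * x_mean
    return slope * n + intercept
```

Once logs are reduced to a numeric series this way, the same trend and anomaly machinery used for metrics applies to them unchanged.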