Solutions for Managing Telemetry Data


  • View profile for Mike Cornell

    Helping companies do more with data.

    6,254 followers

    👀 Enterprises often look to Microsoft Fabric to serve telemetry data to analysts through tools like KQL databases and #PowerBI. For this data to be truly valuable, though, it needs to be joined with other IT/business datasets for additional context. Azure Databricks provides a strong foundation for delivering joined, curated Silver/Gold IT/OT datasets to analyst tools in Fabric with seconds of latency, even for the largest, fastest-moving telemetry feeds. Key advantages include:

    💪 Standardized Core: Automatic incremental ingestion, storage, and governance of all datasets (fast, slow, big, small) happen on a common foundation, eliminating siloed datasets and tooling.

    💪 Enhanced Accessibility: Data becomes available more quickly across all data and AI workloads, including DW, AI, KQL, BI, streaming, etc.

    💪 Cost Efficiency: Reduce costs by storing only the necessary, curated, valuable data (vs. landing all telemetry data) in more expensive "hot/cached" storage.

    💪 Scalability: This pattern scales, supporting the growth of large enterprises and their analytical needs.
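
    A minimal PySpark sketch of the incremental ingest-and-curate step described above, assuming a Databricks notebook (where `spark` is predefined) with Auto Loader; the storage path, table names, and join key are hypothetical, not a specific customer setup.

```python
# Sketch only: incrementally ingest raw telemetry to Bronze, then enrich it
# into a Silver table that Fabric/KQL/Power BI tools can query.
# Paths and table names below are invented for illustration.
from pyspark.sql import functions as F

# Bronze: Auto Loader picks up new telemetry files incrementally.
bronze = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/chk/telemetry_bronze/schema")
    .load("abfss://landing@mystorage.dfs.core.windows.net/telemetry/")  # hypothetical path
)
(bronze.writeStream
    .option("checkpointLocation", "/chk/telemetry_bronze")
    .toTable("iot.bronze.telemetry"))                                   # hypothetical table

# Silver: parse timestamps and join with IT/business context (stream-static join).
assets = spark.read.table("iot.silver.asset_master")                    # hypothetical dimension table
silver = (
    spark.readStream.table("iot.bronze.telemetry")
    .withColumn("event_ts", F.to_timestamp("event_time"))
    .join(assets, "device_id")
)
(silver.writeStream
    .option("checkpointLocation", "/chk/telemetry_silver")
    .toTable("iot.silver.telemetry_enriched"))                          # served to analyst tools
```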

  • View profile for Ozan Unlu

    Founder & CEO - Edge Delta

    16,165 followers

    OpenTelemetry is becoming the standard for collecting logs, metrics, and traces. Yet Day 1 is only half the battle. Day 2 is the big challenge we see at terabyte and petabyte scale: managing it all without drowning in operational overhead.

    Many teams start with DIY OpenTelemetry, deploying and managing agents themselves. But as environments and stakes grow, so does the complexity. Tuning, scaling infrastructure, securing data, and ensuring compliance quickly become overwhelming.

    This is where OpenTelemetry as a Service is extremely valuable. Teams keep all the control and flexibility to focus on making data actionable: filtering, transforming, and routing telemetry before it hits premium downstream destinations. This freedom to control and optimize data pipelines is what makes the difference between reactive firefighting and a proactive, scalable strategy.

    We’re entering a world where observability and security teams must collaborate over shared telemetry, and OpenTelemetry as a Service is a key enabler of that shift. If you’ve gone down the OTel path for #observability (and the same concepts apply to #cybersecurity with OCSF), how are you thinking about sustainably managing it at scale?
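
    The filter/transform/route step usually lives in an OpenTelemetry Collector or managed pipeline; as a minimal sketch of the same principle at the SDK level, here is a Python example assuming the opentelemetry-sdk and OTLP gRPC exporter packages. The service name, collector endpoint, sampling ratio, and health-check filter are all hypothetical.

```python
# Sketch: head sampling plus a filtering exporter so low-value spans never
# reach a premium downstream backend.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, SpanExporter, SpanExportResult
from opentelemetry.sdk.trace.sampling import TraceIdRatioBased
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter


class DropHealthChecks(SpanExporter):
    """Wraps another exporter and discards health-check spans before export."""

    def __init__(self, inner: SpanExporter):
        self._inner = inner

    def export(self, spans) -> SpanExportResult:
        kept = [s for s in spans if s.name != "GET /healthz"]  # hypothetical filter rule
        return self._inner.export(kept) if kept else SpanExportResult.SUCCESS

    def shutdown(self) -> None:
        self._inner.shutdown()


provider = TracerProvider(
    resource=Resource.create({"service.name": "checkout"}),    # hypothetical service
    sampler=TraceIdRatioBased(0.10),                           # keep ~10% of traces
)
provider.add_span_processor(
    BatchSpanProcessor(DropHealthChecks(
        OTLPSpanExporter(endpoint="collector:4317", insecure=True)  # hypothetical endpoint
    ))
)
trace.set_tracer_provider(provider)
```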

  • View profile for Chandresh Desai

    I help Transformation Directors at global enterprises reduce cloud & technology costs by 30%+ through FinOps, Cloud Architecture, and AI-led optimization | Cloud & Application Architect | DevOps | FinOps | AWS | Azure

    125,655 followers

    How do you manage IoT time series data? IoT devices generate a massive amount of data, and it can be challenging to store and manage it all. Amazon Timestream is a purpose-built, managed time series database that can help you handle this challenge. Timestream is designed to store and query time series data efficiently, and it offers a number of features that make it ideal for IoT applications.

    🌎 What is IoT time series data? Imagine you're running a marathon. As you run, your smartwatch tracks your heart rate, pace, and distance. That stream of readings is time series data: values generated continuously over time that can tell you a lot about how you're performing.

    Why use Timestream for IoT time series data? Amazon Timestream is like a personal trainer for your IoT data. It helps you store and manage the data so you can analyze it and use it to improve performance. For example, you could track how your heart rate changes over time and use that to identify areas where your training can improve.

    Key features: 👇
    💥 Scalability: Timestream scales to handle even the most demanding IoT workloads, storing and querying trillions of data points per day.
    💥 Performance: Timestream is designed for high performance and can answer queries in milliseconds, even complex ones.
    💥 Cost-effectiveness: You only pay for the data you store and query.
    💥 Ease of use: Timestream is a fully managed service, so you don't have to manage infrastructure.

    How do you use Timestream for IoT time series data? First create a Timestream database and table, then start ingesting data into it. Once data is ingested, you can query it. Timestream supports a variety of query types, including:
    ✔️ Time window queries: query data for a specific time period.
    ✔️ Aggregation queries: aggregate data over time.
    ✔️ Analytics queries: perform more complex analytics on your data.

    Here are some examples of how Timestream can be used for IoT time series data:
    💥 Industrial IoT: track and monitor industrial equipment, such as factory machines and power plants.
    💥 Smart cities: track and manage smart city infrastructure, such as traffic lights and air quality sensors.
    💥 Connected healthcare devices: track and monitor patient data, such as vital signs and blood sugar levels.
    #cloudcomputing #data #iot
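
    A minimal boto3 sketch of that create/ingest/query flow; the database and table names, region, retention values, and sensor payload are invented for illustration.

```python
# Sketch: create a Timestream database/table, write one record, run an
# aggregation query over a time window.
import time
import boto3

write = boto3.client("timestream-write", region_name="us-east-1")
query = boto3.client("timestream-query", region_name="us-east-1")

# One-time setup: a database and a table with memory ("hot") and magnetic ("cold") retention.
write.create_database(DatabaseName="iot_demo")
write.create_table(
    DatabaseName="iot_demo",
    TableName="sensor_readings",
    RetentionProperties={
        "MemoryStoreRetentionPeriodInHours": 24,
        "MagneticStoreRetentionPeriodInDays": 365,
    },
)

# Ingest one reading (in practice, batch up to 100 records per write_records call).
write.write_records(
    DatabaseName="iot_demo",
    TableName="sensor_readings",
    Records=[{
        "Dimensions": [{"Name": "device_id", "Value": "greenhouse-7"}],
        "MeasureName": "soil_moisture",
        "MeasureValue": "31.4",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),
        "TimeUnit": "MILLISECONDS",
    }],
)

# Time window + aggregation query: hourly average per device over the last 24 hours.
result = query.query(QueryString="""
    SELECT device_id,
           bin(time, 1h) AS hour,
           avg(measure_value::double) AS avg_moisture
    FROM "iot_demo"."sensor_readings"
    WHERE measure_name = 'soil_moisture' AND time > ago(24h)
    GROUP BY device_id, bin(time, 1h)
    ORDER BY hour
""")
print(result["Rows"])
```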

  • View profile for Rob Tiffany⚡️

    Research Director @ IDC

    31,811 followers

    Just a friendly reminder that you can derive significant value from your IoT data without using a nuclear-powered AI data center. Pattern matching incoming telemetry sensor values against predefined KPI ranges often yields sufficient insight to drive the automation you’re looking for.

    A quick look at my Greenhouse AgTech platform pictured below illustrates sensor names, data types, and units of measurement along with green, yellow, and red key performance indicator value ranges. Using soil moisture as an example, the system compares actual sensor readings with the predefined KPIs. Values falling in the green range mean the soil has the right amount of moisture. Values falling in the yellow range tell you the soil is trending toward dryness. Values within the red range denote dry soil. Since the Greenhouse platform is integrated with the farm’s irrigation system, a red KPI pattern match automates irrigation of the crops in the particular block the sensor readings came from. And since we’re interested in precision agriculture, the irrigation stops as soon as we’re back in the green zone.

    Rather than delivering lots of cool dashboards to stare at, I’m a proponent of a “headless,” invisible IoT platform with automated decision making to facilitate a “lights-out” farm or factory. No AI required.
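
    A hedged sketch of that KPI-band pattern in Python: classify a reading into green/yellow/red and drive an actuator accordingly. The thresholds and the irrigation interface are hypothetical, not the actual Greenhouse platform configuration.

```python
# Sketch: compare sensor readings to predefined KPI bands and automate actions.
from dataclasses import dataclass


@dataclass
class KpiBands:
    green: tuple[float, float]   # acceptable range
    yellow: tuple[float, float]  # trending toward a problem
    # anything outside green/yellow is treated as red


# Invented soil-moisture bands (% volumetric water content).
SOIL_MOISTURE = KpiBands(green=(30.0, 45.0), yellow=(25.0, 30.0))


def classify(value: float, bands: KpiBands) -> str:
    lo, hi = bands.green
    if lo <= value <= hi:
        return "green"
    ylo, yhi = bands.yellow
    if ylo <= value <= yhi:
        return "yellow"
    return "red"


def on_reading(block_id: str, moisture: float, irrigation) -> None:
    """Red starts irrigation for the block; green stops it again; yellow takes no action."""
    status = classify(moisture, SOIL_MOISTURE)
    if status == "red":
        irrigation.start(block_id)   # hypothetical actuator interface
    elif status == "green":
        irrigation.stop(block_id)
```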
