
June 03, 2014
Real-time Video Data Capture and Fusion with Telemetry in UAVs
By: Robert Fenney

UAVs, widely used to mount cameras for aerial photography, can be viewed more broadly as mobile sensor platforms. We can put just about any sensor or group of sensors on a UAV and fly it. The only limits on the sensors are size, mass and energy consumption: the greater the mass, and the more energy required to power the sensors, the shorter the flying time for a given power source.

Now that flying sensor platforms exist, what do we do with them? Currently, we observe and measure “stuff.” In the future, we will be able to physically perform tasks. The stuff is the subject of any number of use cases, but they all come down to completing a task better or more safely than it is done today, or doing something that is not done, or cannot be done, with current practices. With UAVs, for example, companies can monitor remote pipelines for leaks or gauge crop irrigation levels without human intervention.

When we perform tasks, we use our five senses to measure what we are doing, as well as what is going on around us. To draw an analogy: to boil an egg, I put water in a pot, place the pot on the stove and turn up the heat. When the water boils, I place the egg in the pot and set a timer. When the timer goes off, I turn off the heat and take the egg out of the pot. Each event (the water boiling, the timer going off) marks a transition in the process of performing the task. Being able to recognize when an event, or transition point, is happening or has happened gives us the ability to react in a meaningful way.

Similarly, with UAVs we have sensor platforms that we use to observe, measure and collect data. Just as in the egg example, our observations and measurements need context to be useful. One way to provide that context is to attach “metadata” to the observations and measurements as they are being made. Metadata is data about data: for example, the time, location and instrument used to make a reading. Adding this context to the data makes it possible to build complex models.
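As a minimal sketch, a metadata-tagged reading might look like the following. The field names, types and units here are illustrative assumptions, not any standard:

    from dataclasses import dataclass

    @dataclass
    class Reading:
        """A single measurement plus the metadata that gives it context."""
        value: float        # the measurement itself, e.g. water temperature
        unit: str           # e.g. "degC"
        timestamp: float    # Unix time when the reading was made
        latitude: float     # where the reading was made
        longitude: float
        instrument_id: str  # which sensor produced the reading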

Let’s back up for a second and look at the difference between a measurement and an observation. A measurement is a discrete reading or sample that is relatively simple to make. In the case of the boiled egg, it would be the temperature of the water or a unit of time.

A real-world observation can be captured by a stream of information, such as a video feed, but it can also be any stream-based information packaged as a collection of frames. Observations are harder to make than measurements, but they can be richer in content. However, if we want to do anything more than look at an observation, we need a way of interpreting it and extracting information. Again, in the case of the egg, we need to recognize that the water is boiling.
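To make the distinction concrete, here is a toy sketch: a measurement is a single tagged value like the Reading above, while an observation is a stream of frames that must be interpreted before it yields information. These types are hypothetical, and the boiling check assumes temperatures have already been extracted from the frames; in practice that extraction would itself be a computer-vision task:

    from dataclasses import dataclass
    from typing import Iterable, List

    @dataclass
    class Frame:
        """One frame of an observation stream (e.g. a video frame)."""
        timestamp: float
        payload: bytes      # raw content; interpretation happens downstream

    @dataclass
    class Observation:
        """A stream-based observation packaged as a collection of frames."""
        instrument_id: str
        frames: List[Frame]

    def water_is_boiling(temperatures: Iterable[float]) -> bool:
        """Toy interpreter: declare the water boiling once any extracted
        temperature reaches 100 degrees C (at sea-level pressure)."""
        return any(t >= 100.0 for t in temperatures)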

So how do we tie together data composed of measurements and observations? By using the metadata and a common point of reference. In our case, the two standard points of reference are time and location, so every measurement or observation is tagged with the time and location at which it was recorded.
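One way to sketch such a common reference is to bucket each record’s timestamp and snap its position to a coarse grid, so that readings made at roughly the same time and place share a key. The bucket and grid sizes below are arbitrary illustrative choices:

    def reference_key(timestamp: float, lat: float, lon: float,
                      time_bucket_s: float = 1.0, grid_deg: float = 0.001):
        """Map a tagged record onto a shared (time, location) reference by
        bucketing time and snapping position to a coarse grid, so that
        measurements and observations made together share the same key."""
        return (int(timestamp // time_bucket_s),
                round(lat / grid_deg), round(lon / grid_deg))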

We also need communication channels, or telemetry, to capture the information in real time as the UAVs publish it. These telemetry channels support both discrete measured data and streaming observation data, and they can be used to send instructions back to the UAVs while in flight.
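Here is a sketch of the discrete side of such a channel, assuming a hypothetical ground-station address and a simple JSON-over-UDP framing. A real system would more likely use a radio link with a telemetry protocol such as MAVLink or MQTT:

    import json
    import socket

    # Hypothetical ground-station endpoint (192.0.2.10 is a reserved
    # documentation address).
    GROUND_STATION = ("192.0.2.10", 5005)

    def publish_measurement(reading: dict) -> None:
        """Send one discrete measurement over a UDP telemetry channel.
        Streaming observations such as video would use a separate,
        higher-bandwidth channel."""
        packet = json.dumps(reading).encode("utf-8")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(packet, GROUND_STATION)

    publish_measurement({"value": 99.8, "unit": "degC",
                         "timestamp": 1401800000.0,
                         "latitude": 37.42, "longitude": -122.08,
                         "instrument_id": "thermo-1"})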

Now we can overlay the data with the time and location metadata in a process called data fusion, then combine it into models for further processing. If we process data received in real time from the UAVs, we can generate events that affect the UAVs as they are working. We can also take the higher-resolution recorded data from the UAVs and load it into the models after the UAVs have finished their tasks. With the higher-resolution data, the models are richer and more detailed.
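A minimal sketch of that fusion step, reusing the hypothetical reference_key helper and record fields from the earlier sketches: records sharing a (time, location) key are grouped into one model entry, and a toy event generator scans the groups. The boiling threshold is illustrative:

    from collections import defaultdict

    def fuse(records):
        """Overlay records that share a (time, location) reference key.
        Each fused group becomes one entry in the model."""
        model = defaultdict(list)
        for rec in records:
            key = reference_key(rec["timestamp"],
                                rec["latitude"], rec["longitude"])
            model[key].append(rec)
        return model

    def detect_events(model):
        """Toy event generator: flag any fused group whose temperature
        readings indicate boiling."""
        for key, recs in model.items():
            temps = [r["value"] for r in recs if r.get("unit") == "degC"]
            if temps and max(temps) >= 100.0:
                yield {"event": "boiling", "reference": key}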

This brings us to the last problem: how do we scale to handle large numbers of UAVs and richer models that evolve over time? The answer is an elastic cloud-processing layer that can scale in real time with usage requirements and the complexity of our models. Events are published into an event stream that is consumed by the subscribing applications.
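The publish/subscribe pattern itself can be sketched in a few lines. This is a minimal in-process stand-in, not a real deployment, which would use a managed, horizontally scalable message bus in the cloud:

    from collections import defaultdict
    from typing import Callable, Dict, List

    class EventStream:
        """Minimal in-process stand-in for a cloud event stream:
        applications subscribe to topics and producers publish events."""

        def __init__(self) -> None:
            self._subscribers: Dict[str, List[Callable]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, event: dict) -> None:
            for handler in self._subscribers[topic]:
                handler(event)

    stream = EventStream()
    stream.subscribe("uav.events", lambda e: print("got event:", e))
    stream.publish("uav.events", {"event": "boiling", "uav": "uav-7"})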

The end result? We can now fly UAVs, make measurements and observations, and push them via telemetry channels into a fusion layer. The fusion layer adds metadata that aligns the information contextually, then publishes the aligned data in the cloud, where an intelligent application builds a model from it and extracts meaning. That application can then identify events from its model and publish them to any and all interested applications.

Now that you know more about how UAVs process data, what applications and associated use cases does your company have?
