Surface Observation Pipeline Research
Proposal · Pre-data collection

Study 02: Why Two Apps Can Show Different Temperatures for the Same Place

Proposed independent study. Educational outcome: a plain-language guide to averaging in weather data.

The Question Behind This Study

A common puzzle for weather-app users: open two apps side by side, both reading from official sensors at the same airport, and they sometimes show slightly different temperatures — say, 87°F and 88°F. Neither is wrong. They're using the same sensor, but processing its readings differently. This study aims to explain that processing in a way anyone can follow, and to measure how often these small differences actually appear in everyday weather information.

Background

A modern automated weather sensor in a major US city records temperature very frequently — about every ten seconds. Those rapid measurements are then summarized in two different ways before being reported: as 1-minute averages (a smoothing of the most recent minute of samples) and as 5-minute rolling averages (a smoothing of the most recent five minutes). The two summaries usually agree closely, but during rapidly changing conditions — a thunderstorm gust front, a sudden cold front, a sun-warmed surface affected by a passing cloud — the two windows can produce noticeably different "peak" values for the same hour.
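The effect of window width on reported peaks can be sketched in a few lines of code. This is an illustrative toy, not the sensor's actual processing: the synthetic data, the linear warming trend, and the brief spike are all invented for demonstration, and the real averaging implementation may differ in detail.

```python
# Toy demonstration (synthetic data, NOT the official sensor algorithm):
# the same 10-second samples yield different "peak" temperatures
# depending on the averaging window applied.
import statistics

samples = []  # one hour of synthetic readings (360 samples at 10 s), deg F
for i in range(360):
    base = 85.0 + 1.0 * (i / 360)            # slow warming trend (invented)
    spike = 3.0 if 180 <= i < 183 else 0.0   # brief 30-second excursion (invented)
    samples.append(base + spike)

def rolling_mean(data, window):
    """Mean over the most recent `window` samples, one value per sample."""
    return [statistics.mean(data[max(0, i + 1 - window): i + 1])
            for i in range(len(data))]

one_min = rolling_mean(samples, 6)    # 6 samples  = 1 minute
five_min = rolling_mean(samples, 30)  # 30 samples = 5 minutes

# The wider window dilutes the brief excursion, so its peak is lower.
print(max(samples), max(one_min), max(five_min))
```

Running this shows the raw-sample peak exceeding the 1-minute peak, which in turn exceeds the 5-minute peak — the same ordering sketched in Figure 1.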

The NOAA Phoenix office published an instructive example illustrating this exact effect: "the peak 1-minute temperature was 117°F, but the rolling 5-minute average peaked at 116°F" [1]. A 2005 study in the Journal of Atmospheric and Oceanic Technology by Sun and Baker found that automated and reference-quality sensors agreed within ±0.3°C in about 71% of hours, with average differences of about 0.05°C [2]. Those prior studies focused on differences between sensor types. The complementary question — how the same sensor's data looks different depending on which averaging window you ask for — is less commonly documented in publicly accessible writing.

[Figure: conceptual time-series chart of temperature over one hour (14:00–15:00). Three curves — noisy 10-second raw samples (peak 89°F), a smoother 1-minute average (peak 88°F), and the smoothest 5-minute rolling average (peak 87°F) — with each peak occurring at a slightly different time and value.]
Figure 1. Conceptual illustration of how the same observation interval produces different "peak" values depending on the averaging window used. Brief sensor excursions are visible in 10-second raw samples but smoothed by progressively wider averaging windows. Different downstream products report different peaks. Values shown are illustrative.

Research Questions

  1. For a given hour at a major US city, how often do the 1-minute and 5-minute summaries of temperature actually agree, and how often do they differ?
  2. When they differ, by how much? Is the typical difference imperceptible to a casual observer, or large enough to show up in the temperature people see on their phones?
  3. Are there patterns — by season, by time of day, by city — in when these differences are most pronounced?
  4. What weather conditions produce the largest gaps, and can those conditions be identified in plain language for a general audience?

Proposed Approach

For approximately 10–15 major US cities over a sustained period, observations will be collected in three forms: precise hourly readings, the 6-hour summary remarks that sensors issue at standard intervals, and the smoothed 5-minute archive values. For each city-day, these three perspectives on the same period's temperature behavior will be compared and any disagreements recorded. The findings will be aggregated into accessible written content, with visualizations showing how often, and under what conditions, these small differences appear.
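The core comparison step could look something like the sketch below. Everything here is hypothetical: the function name, the example peak values, and the 1.0°F agreement threshold are placeholders chosen for illustration, not decisions the study has made.

```python
# Hypothetical sketch of the planned comparison: given two hourly
# peak-temperature series for the same station (one from 1-minute data,
# one from 5-minute data), tally agreement rate and the largest gap.
# The 1.0 deg F threshold is an illustrative placeholder.

def compare_peaks(one_min_peaks, five_min_peaks, threshold_f=1.0):
    """Return (agreement_rate, max_gap) across paired hourly peaks."""
    gaps = [abs(a - b) for a, b in zip(one_min_peaks, five_min_peaks)]
    agree = sum(1 for g in gaps if g < threshold_f)
    return agree / len(gaps), max(gaps)

# Invented example values for three hours of paired peaks:
rate, worst = compare_peaks([88.0, 91.2, 87.5], [88.0, 90.1, 87.4])
print(rate, worst)
```

Aggregating these per-hour results by season, time of day, and city would directly address research questions 1–3.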

Who Benefits From This Work

Anticipated Findings

Cross-References to Other Studies

References

  1. NWS Phoenix. High-Resolution KPHX ASOS Data. weather.gov/psr/HiResASOS
  2. Sun, B. and B. Baker (2005). A Comparative Study of ASOS and USCRN Temperature Measurements. J. Atmos. Oceanic Tech., 22(6). journals.ametsoc.org
  3. NOAA NWS. Automated Surface Observing System (ASOS) User's Guide. weather.gov/media/asos
  4. NOAA. Federal Meteorological Handbook No. 1: Surface Weather Observations and Reports.