How Working With Live Data Changed My Approach to Analytical Thinking

Author: Heidi Mill

Published: February 5, 2026

Working on projects for class is usually very straightforward. The end goal, scope, and even the data itself are clearly defined from the start. You’re typically given a dataset along with specific instructions—use these columns, apply these methods, and answer these questions. Expectations are clear, the professor guides you through the process, and the timeline is short, often just a week or two, with relatively low expectations for hours spent.

Working with data in a professional setting is very different. Real-world data work is far more ambiguous and open-ended, with less clearly defined scope and delivery timelines. In my job, the data I work with comes from a live connection to the source. This means I constantly have to consider how often the data changes and how those changes affect how I clean, transform, and analyze it.

Some of the data I work with is mostly static. When it updates, it simply adds a new row of information, allowing me to look back at previous values and easily perform week-over-week comparisons. However, other datasets can be completely overwritten depending on the circumstances. This means the data I see on Monday may not be the same data I see on Friday, with no built-in way to view what the data looked like earlier in the week.
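The difference between the two update patterns can be sketched in a few lines of Python. The field names (`week`, `open_tickets`) are purely illustrative, not from any real dataset:

```python
from copy import deepcopy

# An append-only source adds a row on each refresh, so history survives.
appendable = [{"week": "2026-W05", "open_tickets": 42}]
appendable.append({"week": "2026-W06", "open_tickets": 37})

# Week-over-week comparison works because last week's row is still there.
week_over_week = appendable[-1]["open_tickets"] - appendable[-2]["open_tickets"]
print(week_over_week)  # -5

# An overwriting source replaces the value in place; the old number is gone.
overwritten = {"open_tickets": 42}
previous = deepcopy(overwritten)   # only possible if we archived a copy first
overwritten["open_tickets"] = 37
# Without a snapshot like `previous`, this comparison would be impossible.
print(overwritten["open_tickets"] - previous["open_tickets"])  # -5
```

The second half is the whole problem in miniature: once the source overwrites itself, any comparison depends on a copy you made yourself.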

Despite this limitation, the management team I report to still expects week-over-week comparisons and, by the end of the year, a clear picture of how the data changed over time. This creates a challenge: if I waited until December 31 to generate a yearly report, I wouldn’t be able to show trends between busy and slow seasons because past versions of the data would no longer exist. Since this historical perspective is essential for decision-making, I had to think carefully about how to “freeze” or archive the data without compromising its integrity.

Another important consideration is ensuring that I am querying the live data source correctly. Because dashboards and visuals automatically update when the data refreshes, I need to verify that they are displaying the correct statistics and time periods. A small mistake in a query can lead to misleading visuals, which can quickly become a larger issue when shared with management.
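One way to guard against that is a small sanity check run before trusting a refreshed visual. This is a sketch under assumptions: the `check_refresh` helper and the `date` field name are hypothetical, not part of any real dashboard tooling:

```python
from datetime import date, timedelta

def check_refresh(rows, expected_start, expected_end):
    """Raise if the queried rows don't cover the expected reporting window.

    `rows` is a list of dicts with a `date` key, standing in for whatever
    a live-source query actually returns."""
    if not rows:
        raise ValueError("query returned no rows")
    seen_min = min(r["date"] for r in rows)
    seen_max = max(r["date"] for r in rows)
    if seen_min > expected_start or seen_max < expected_end:
        raise ValueError(
            f"expected {expected_start}..{expected_end}, "
            f"got {seen_min}..{seen_max}"
        )
    return True

# Usage: confirm the last full week is present before publishing a visual.
today = date(2026, 2, 5)
rows = [{"date": today - timedelta(days=d)} for d in range(10)]
print(check_refresh(rows, today - timedelta(days=7), today))  # True
```

A check like this turns a misleading visual into a loud error before it ever reaches management.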

The key takeaway is that working with static datasets is very different from working with constantly changing data while still needing accurate time-based comparisons. To handle this, I’ve created weekly routines in both Power BI and Excel. In Excel, I use macros to ensure that the same steps are followed every time. One of these steps involves what I call “freezing” the data—copying the current values and pasting them into a new sheet, or saving end-of-week data in a designated comparison area. I save each weekly report with the current date, while maintaining a separate working file for daily updates and refreshes.
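The actual routine lives in an Excel macro, but the freezing step itself is simple enough to sketch in Python. The file names and CSV format here are assumptions for illustration:

```python
import shutil
from datetime import date
from pathlib import Path

def freeze_weekly(working_file: Path, archive_dir: Path, today: date) -> Path:
    """Copy the live working file to a date-stamped snapshot.

    The working file keeps refreshing day to day; the snapshot is the
    frozen end-of-week record used for comparisons later."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    snapshot = archive_dir / f"report_{today.isoformat()}.csv"
    shutil.copy2(working_file, snapshot)  # working file is left untouched
    return snapshot

# Usage sketch:
# freeze_weekly(Path("working.csv"), Path("archive"), date.today())
```

The point of date-stamping the copy rather than overwriting one "last week" file is that every past snapshot stays retrievable, which is exactly what the year-end trend report needs.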

Power BI works a bit differently. I export specific tables into designated files, which are then consolidated to support week-over-week comparisons for live data. Each row is tagged with the date it first appeared, so every comparison is made against an accurate record of what the data looked like at that point in time.
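A minimal sketch of that consolidation step, assuming each exported row carries an `id` field (the key name and row shape are illustrative, not the actual Power BI export format):

```python
from datetime import date

def consolidate(history, export, today, key="id"):
    """Fold a fresh export into the running history, stamping rows that
    have not been seen before with the date they first appeared."""
    known = {row[key] for row in history}
    for row in export:
        if row[key] not in known:
            history.append({**row, "first_seen": today.isoformat()})
    return history

# Usage: two exports a week apart; only the new row gets the later stamp.
hist = []
consolidate(hist, [{"id": 1, "status": "open"}], date(2026, 2, 2))
consolidate(hist, [{"id": 1, "status": "open"},
                   {"id": 2, "status": "open"}], date(2026, 2, 6))
print([r["first_seen"] for r in hist])  # ['2026-02-02', '2026-02-06']
```

Because rows are never deleted from `history`, the consolidated file accumulates exactly the timeline that the overwriting source throws away.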

Through this process, I can confidently answer questions about the current data landscape while also preserving historical data. This allows me to show management exactly what the data looked like at any given point in time—and know precisely where to find it.