Take a Tour

Tips with Diana

We love data! But, we also know that collecting, analyzing, and reporting with data can be daunting. The person we turn to when we have questions is Diana Aleman - our Editor Extraordinaire for SAGE Stats and U.S. Political Stats. And now Diana is bringing her trials, tribulations, and expertise with data to you in a brand new monthly blog, Tips with Diana. Stay tuned for Diana's experiences, tips, and tricks with finding, analyzing and visualizing data.



  October 30, 2017

The American Community Survey: U.S. demographic characteristics at your fingertips

The Census Bureau is everyone’s go-to source for national and local U.S. statistics. It has just about anything and everything you can think of about the U.S. population. What U.S. county has the greatest number of people claiming Nepali ancestry? What is the average mortgage payment in my zip code? What is the average travel time to work in Wyoming?

You get the idea. The Census Bureau has a lot to offer, but there is one dataset in particular that is likely to provide answers to many of the different socioeconomic questions you have: the American Community Survey (ACS).

The breakdown

The ACS is an annual survey program that collects and provides key indicators about the American public. It covers a multitude of topics such as employment, housing costs, health insurance coverage, and so on. Think of it as an annual version of the decennial census – only instead of collecting basic information like race and sex, the ACS collects more detailed characteristics like average rent paid, educational attainment, and much more. ACS statistics are released in batches, typically beginning in the fall following the reference year. These batches are divided into the 1-year, 1-year supplemental, and 5-year estimates. But what do these mean exactly?

Here is what each type of estimate covers and how to use it:

  • 1-year estimates: Data collected over a 12-month period, e.g. January 1, 2016–December 31, 2016. Best used when analyzing areas with populations of 65,000 or more and when currency is more important than precision.
  • 1-year supplemental estimates: Data collected over a 12-month period. Best used when analyzing areas with populations of 20,000 or more and when smaller geographies are not available in the regular 1-year estimates release.
  • 5-year estimates: Data collected over a 60-month period, e.g. January 1, 2012–December 31, 2016. Best used when precision matters more than currency and when analyzing populations of any size. These are the best estimates to use when analyzing small population areas.

For a complete breakdown, check out the Census Bureau.

So what's the best way to browse all the ACS statistics?

The best entry point to the ACS statistics you want is American FactFinder, a warehouse of statistical information from surveys conducted by the Census Bureau (including the ACS). It provides multiple ways to get the information you need – from a simple location search to a mass download option. Be sure to select the American Community Survey as the specific Census program you would like to view. As I mentioned, the ACS collects information on a diverse range of topics, unlike other Census surveys that typically focus on a single subject (the American Housing Survey, for example). The ACS therefore gives you more bang for your buck! However, take note: ACS estimates are based on a smaller sample of the U.S. population than the traditional decennial census, so they carry a higher margin of error – they are less precise, but timelier. For more information, visit the Census Bureau.




  October 10, 2017

Tips on Data Viz

The basics

For most of us, the fun bit of working with data and statistics is the visualization aspect. Many of us are visual learners, or at least understand information best when it’s simplified into a picture. This is especially true when dealing with data and statistics, which hide the meaning and significance of the information they carry behind numbers and variables. No one looks at the data set below and immediately thinks, “Got it, the construction industry’s contribution to U.S. GDP is rising again after the Great Recession. Easy.”

Just looking at a data set is not enough (to the inexperienced eye) to identify patterns in the numbers – we need a representation to more quickly and easily communicate the meaning and significance to our readers. For instance, the construction data in chart form relays the same information, but in a much more understandable way.

Thanks to Excel, almost everyone who has used a computer in school or for business in the past 15 years can create a simple chart like a bar or line graph. (And if you haven’t, the online resources available are infinite.)

However, even if you are a seasoned Excel user or chart builder, below are some practices you should keep in mind as you work on your next visualization.

The breakdown: Top Tips on Creating Data Visualizations

  1. Ask yourself, “What do I want to show in my chart?” Do you want to show a comparison, trend over time, or relationship among data sets? All of these are great options, but one chart type will usually represent the information better than others. Here’s a quick guide compiled by Dr. A. Abela to help you narrow your options to the best choice.
  2. Minimize the number of variables in your chart to avoid confusion. What does the reader need to know to understand the significance of the information? Limit the chart to those items.
  3. Keep the focus on the data, not the visual. It’s tempting to go all out and create a complex visualization using graphic design, but sometimes simplicity is best because you avoid distorting the information or leaving it open to misinterpretation.
  4. Provide context. Don’t assume the reader will immediately understand the information the chart is trying to impart – provide a brief title or subtitle as needed. Be sure to indicate what is being measured (people? U.S. dollars? Squirrels?) in the title or in any labels (see the sketch after this list).
  5. Provide a detailed citation. “Census Bureau” is not going to cut it! Big sources like the Census Bureau release thousands of data sets on an annual basis – help the reader who wants to find and use the chart’s underlying data by providing the name of the report or data set and a direct URL.
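
If you build charts in code rather than Excel, the same principles apply. Here is a minimal matplotlib sketch that follows tips 1, 4, and 5 – a line chart for a trend over time, a title and axis labels that say what is being measured, and a source note. The construction GDP figures below are placeholder values for illustration only, not real statistics.

```python
import matplotlib.pyplot as plt

# Illustrative placeholder values only (not actual figures): construction's
# contribution to U.S. GDP, in billions of dollars.
years = [2006, 2008, 2010, 2012, 2014, 2016]
construction_gdp = [650, 580, 510, 540, 610, 680]

fig, ax = plt.subplots()

# Tip 1: a line chart suits a trend over time.
ax.plot(years, construction_gdp, marker="o")

# Tip 4: say what is being measured, and in what units.
ax.set_title("Construction industry contribution to U.S. GDP")
ax.set_xlabel("Year")
ax.set_ylabel("Billions of U.S. dollars")

# Tip 5: a detailed citation, not just "Census Bureau".
fig.text(0.01, 0.01,
         "Source: illustrative values; cite the exact report or data set and a direct URL here.",
         fontsize=8)

plt.tight_layout()
plt.show()
```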

For more information on how to create the best data visualization, check out Data Visualisation: A Handbook for Data Driven Design by Andy Kirk!



  August 14, 2017

The Data Analysis Process

#3 – Summarizing your data

The basics

So – we have found the data and we have cleaned the data. Great! But, now what do we do with it? The third and final stage of the data analysis process really gets to what you needed to begin with – information and supporting evidence.

Context: Read this CQ Researcher report to learn about energy development and its possible expansion into Native American territories!

As I mentioned in my first post, raw data oftentimes does not make sense at face value, or at least does not provide enough context for a person to understand its significance. This requires the user to “summarize” that micro-information into straightforward intelligence. “Summarizing” data into statistics is much less about creating new information than it is about translating and contextualizing the data into meaningful information for everyone.

In the midst of the 2016 campaign debates about climate change, my editor came across statistics on U.S. electrical generation by state and recommended them as a great addition to SAGE Stats. I agreed and found the original data on the Energy Information Administration’s (EIA) website. The EIA regularly releases data on electricity generation by source across the U.S.; however, you’ll see below that electricity generation is measured in megawatt hours.

Not many people understand what a “megawatt hour” is – I certainly didn’t; I had to Google it! Is 1,000 megawatt hours a lot? Is it too little? How about one million?

Although I had no idea what a megawatt hour was, I understood it was measuring energy production across the U.S., which is valuable information for assessing which states are moving away from traditional energy sources like coal. But how could I translate megawatt hours into a statistic that everyone could understand? This required me to calculate statistics from the EIA data to neatly “summarize” the information it provided.

The breakdown: Summarizing data, an example.

In this scenario, I was specifically interested in electricity generation by source type and by U.S. state. The EIA provides this information as well as overall total electricity generation values. When facing raw values such as these, ask yourself, “What can I compare these values to in order to better understand their significance?” This is a great question to ask because your audience will understand information much more easily if it’s compared to other information.[1] There are a number of statistics you can calculate to answer this question:

  • Totals: summing values to get a big-picture perspective is often handy.
  • Percent of totals: excellent for comparing segmented data against overall totals.
  • Amount change: a good option for comparing how much values have changed.
  • Percent change: a good way to compare the relative size of those changes.
  • Averages: these include the mean and the median.

Based on the EIA’s data, I decided that comparing electricity generated by source type to the overall total electricity generated was much more meaningful. So I calculated percentages for each source type against the total number of megawatt hours. That way I could gauge how much of each state’s total electricity was generated by coal, wind, natural gas, and so on. The results were much easier to understand and particularly enlightening!
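
If you like to work in code, here is a minimal sketch of the same percent-of-total calculation in Python with pandas. The states, sources, and megawatt-hour values are made up for illustration and do not reflect the EIA’s actual figures or file layout.

```python
import pandas as pd

# Hypothetical generation data in megawatt hours (not the EIA's actual layout or values).
generation = pd.DataFrame({
    "state":  ["Iowa", "Iowa", "Kansas", "Kansas"],
    "source": ["Wind", "Coal", "Wind", "Coal"],
    "mwh":    [20_000_000, 10_000_000, 15_000_000, 20_000_000],
})

# Total megawatt hours generated by each state, regardless of source.
state_totals = generation.groupby("state")["mwh"].transform("sum")

# Each source's share of its state's total generation, as a percentage.
generation["pct_of_state_total"] = (generation["mwh"] / state_totals * 100).round(1)

print(generation)
```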

Embedded visualization from SAGE Stats product.

Once the data was mapped out, I saw that a large percentage of the Midwest’s electrical generation was due to wind energy – an interesting result considering that neighboring states have strongly adhered to coal and oil.







So I can just throw any raw data values together, right?

Yeah, that’s a big N-O. Use your best judgment when you calculate statistics. Any of the statistics in the section above should be calculated with values of the same unit of measure. So don’t go adding dollar values and percentages together, because that makes no sense. Likewise, be careful of missing or incorrect data values that can throw your calculations off (although the evaluation stage covered in the first post of this series should help you catch those!). If in doubt, ask for help from a trusted resource such as your instructor, librarian, or colleague.

 

[1] Herzog, David. Data Literacy. Thousand Oaks: SAGE Publishing, 2016. Print. http://methods.sagepub.com/book/data-literacy


  July 28, 2017

The Three Stages of Data Analysis

#2 – Cleaning your data

The basics

The term “data cleaning,” the second stage of the data analysis process, is usually met with some confusion. I mentioned to a friend that the most recent SAGE Stats data update required a lot of cleaning, which was taking up a significant amount of time. She asked, “So what exactly is data cleaning?” An excellent question!

Data cleaning or “scrubbing” consists of taking disorganized, messy data and transforming it into a format that enables easier analysis and visualizations. Depending on your formatting or metadata requirements and how big the data file is, it can take days to clean a file into submission.

Since I began working on SAGE Stats, I’ve learned many Excel tricks that can be applied to any kind of data cleaning situation. To avoid information overload, I’ll stick to the tricks I’ve successfully used in the past two years.

The breakdown

Top 10 Tips on Cleaning Your Data

  1. Read the data documentation. This will tell you what each component of the data file represents and help you identify what data is most relevant to your research interests and what data you can avoid.
  2. Excel’s “Text-to-Columns” feature. Large data files are often stored in “csv” (comma-separated values) format and can be parsed into proper columns in Excel using this handy feature.
  3. VLOOKUP formula. My holy grail of Excel formulas. Do you want to pull multiple values from one workbook into another? VLOOKUP has your back.
  4. COUNTIF formula. Are you looking for duplicate values in a range or checking whether values in one workbook are present in another workbook? COUNTIF counts the number of times a value occurs in a range!
  5. LEFT and RIGHT formulas. These are very useful when you need to parse out specific characters from the beginning or end of a value. For instance, if “092017” represents September 2017 but I only need the year, I can use the RIGHT formula to collect the last four digits.
  6. TRIM formula. Frustrated by inexplicable extra spaces that follow the value you want? This formula “trims” those out for you.
  7. CONCATENATE formula (or “&”). Concatenate is a fancy word for linking two values together – you can use the formula for this or insert an ampersand between the two cell references, e.g. =A1&B1.
  8. Filters. I don’t think Excel’s filters get enough credit. Are you looking for multiple misspellings of New York? Filters help you quickly identify and correct them.
  9. Nest your formulas. Find ways to combine formulas to reduce the number of steps you have to complete! For instance, do you need to look up values in Workbook 1 that are associated with a value’s last five characters in Workbook 2? Nest the RIGHT and VLOOKUP formulas to quickly get your answer.
  10. Work off a copy of the original data file. You don’t want to be in a situation where you have mistakenly deleted data values and then have to download the data file again. Keep the original version handy as a backup.
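
These tips are written for Excel, but the same cleaning moves exist in most data tools. As a rough sketch, here is how a few of them (tips 3 through 7) might look in Python with pandas; the file names and column names below are hypothetical.

```python
import pandas as pd

# Tip 10: work off a copy of the original file - read it, never overwrite it.
df = pd.read_csv("raw_data.csv")          # hypothetical file with columns: code, state, period
lookup = pd.read_csv("lookup_table.csv")  # hypothetical file with columns: code, label

# Tip 3 (VLOOKUP): pull matching values from another table.
df = df.merge(lookup, on="code", how="left")

# Tip 4 (COUNTIF): count how many times each value occurs.
counts = df["state"].value_counts()

# Tip 5 (RIGHT): keep the last four characters, e.g. the year in "092017".
df["year"] = df["period"].astype(str).str[-4:]

# Tip 6 (TRIM): strip stray spaces around values.
df["state"] = df["state"].str.strip()

# Tip 7 (concatenation): join two columns into one.
df["state_year"] = df["state"] + "-" + df["year"]

print(counts.head())
```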

This is a lot of work. Why do I need to clean the data file at all?

Sometimes a data set is so simple that it requires no cleaning at all; however, that’s not usually the case. These days you will typically encounter a file with all data merged into one column, which you then have to unmerge or parse out by yourself. Then you find that you need to concatenate some values back together. And then you realize that some values occur multiple times and you want to find out how many times each one occurs in the file. All this when you only want a snippet of that information! Data cleaning is a necessary evil at times in order to get your data in shape for easier visualizations and more accurate information.

The best way to learn these tricks (and even more advanced tricks) is to dive in head first and try them out with a specific data set. In Excel’s case, doing is better than reading or listening. After all, no one starts out as an expert, and I am no exception! My tips above are suggestions and may not work with your specific needs, but they can be applied in almost every kind of data situation. If you use them often enough, then they practically become muscle memory.



  June 7, 2017

The Three Stages of Data Analysis

#1 – Evaluating raw data

The basics

Starting to analyze your data? Head to SAGE Research Methods' Which Stats Test for more guidance!

A friend I haven’t seen in a while asked me what I do for a living, and I talked about SAGE Stats and the work that goes into maintaining and building the collection. Instead of his eyes glazing over (like most people’s would) he asked me, “Ok. Not to seem like an idiot, but what is data analysis? Like what does it cover?” If you’ve had similar thoughts, never fear! I think I can safely say I’ve received multiple variations of this question before. My typical answer: what doesn’t it cover?

Data analysis covers everything from reading the source methodology behind a data collection to creating a data visualization of the statistic you have extracted. All the steps in-between include deciphering variable descriptions, performing data quality checks, correcting spelling irregularities, reformatting the file layout to fit your needs, figuring out which statistic is best to describe the data, and figuring out the best formulas and methods to calculate the statistic you want. Phew. Still with me?

These steps and many others fall into three stages of the data analysis process: evaluate, clean, and summarize.

Let’s take some time with Stage 1: Evaluate. We’ll get into Stages 2 and 3 in upcoming posts. Ready? Here we go…

The breakdown: Evaluate

Evaluating a data file is kind of like an episode of House Hunters: you need to explore the file for structural or other flaws that would be deal breakers for you. How old is this house? Is the construction structurally sound? Is there a blueprint I can look at?

Similarly, when evaluating a raw data file you have collected, you should consider the following questions and tips:

  • Read through the data dictionary, codebook, or record layout, which should detail what each field represents. Try not to start playing with the data until you know what you’re looking at. You wouldn’t start renovations in your new house without reading the blueprints, right? You gotta know if that wall is load-bearing!
  • What irregularities does the methodology documentation detail, and how might they have affected the data? Which methodology notes should I make transparent to the reader?
  • Is the raw data complete? That is, are there missing values for any records? (Missing values in the raw data can distort your calculations.)
  • What outliers exist in the data set? Do they make sense in the context of the data? For instance, a house price of $1.8 million in a neighborhood where houses don’t exceed $200K is probably a red flag.
  • Spot check the raw data. If the data set provides totals, then sum the values and check that they match. If they don’t, then does the documentation explain why they may not add up to the totals?

When spot checking, it’s good to start with data points you’re already familiar with. For geographic data, for example, checking your home state and other states you know well will help you spot something weird or off much faster than checking something at random. (A quick sketch of an automated totals check follows below.)
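
Here is a minimal sketch of that totals spot check in Python with pandas, assuming a hypothetical county-level file and made-up reported totals:

```python
import pandas as pd

# Hypothetical county-level file; columns assumed: state, county, value.
df = pd.read_csv("county_data.csv")

# Totals as reported in the source's documentation (made-up numbers for illustration).
reported_totals = pd.Series({"State A": 1000, "State B": 2500})

# Recompute each state's total from its county records and compare to the reported totals.
computed_totals = df.groupby("state")["value"].sum()
diff = computed_totals.reindex(reported_totals.index).fillna(0) - reported_totals

# Any nonzero difference is worth a closer look in the documentation.
print(diff[diff != 0] if (diff != 0).any() else "Reported totals match the detail records.")
```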






The Washington Post has compiled incident-level data on police shootings since 2015 with the help of crowdsourcing. This is an impressive feat, but as I evaluated the raw data they provide, I walked away with several questions:
  • Are missing values due to underreporting by police?
  • What are the original sources for each incident?
  • Do they distinguish between neighborhoods in cities or just use major cities?
Together, these questions helped me decide that the Post's data was not suitable for use in SAGE Stats quite yet.






So if the source is good, then the data must be good too. Right?

It’s a mistake to assume the data is authoritative or fine as is just because it comes from a published government source or another source you consider just as reliable. Data reporting is susceptible to manipulation and simple mistakes despite the best efforts and intentions of the responsible organizations. Assume nothing and evaluate the data to ensure it checks out! The next stage of data analysis is cleaning raw data to fit your needs. Stay tuned for my next post, where I will review the most effective Excel tips and tricks I’ve learned to help you in your own work!



  May 1, 2017

Data and Statistics 101

The fundamental difference between data and statistics (because who knew!)

The basics

If you haven't seen David McCandless' TED Talk presentation, you need to!

Before I started working on SAGE Stats, the idea of working with a large data set was quite intimidating. Shout out to the USDA’s Food Access Research Atlas! In the two years since, working regularly with our platform has really opened my eyes to how empowering and beautiful data is once you understand how to pull usable information from it.

My experience has also taught me how overwhelming and confusing data can be. What is a data set, and how is it different from a time series? How can I tell whether data content is reliable or not? What the heck is a data dictionary, and why do I need it? Unless you are consistently elbow-deep in data, it can be difficult to know where to even start. So let’s begin with the very basics: what is the difference between data and statistics?

The two terms are often used interchangeably – sometimes even in the same breath. I have caught myself using both terms when explaining SAGE Stats to team members and close friends without a second thought. Although it is easy to treat the two as synonyms, they are in fact very different.

The breakdown

Data are collected and organized information typically provided in massive files with detailed records and a data dictionary to decode the variable information. The records in those data files do not communicate significant meaning to the naked eye, so time and analysis are needed to read through the data collection methodology, decipher variable information, and determine which variables are of interest to you.

You'll recognize data as those ugly, massive files that instantly cause your hard drive to whine when you try to open them or cause the much-feared wheel of death to appear.


Embedded visualization from SAGE Stats product.


Statistics are clear and understandable explanations or summaries of data based on analysis. Statistics are generally available in tables and represented graphically. For example, the median state unemployment rate in the U.S. was 4.0% in 2016. This is a statistic derived from analysis of sample data collected by the U.S. federal government.
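
To make the distinction concrete, here is a tiny Python sketch in which the "data" are individual state unemployment rates (made-up values, not the real figures) and the "statistic" is the single median that summarizes them:

```python
from statistics import median

# The "data": one record per state (made-up unemployment rates for illustration).
state_unemployment = {
    "State A": 3.2,
    "State B": 4.5,
    "State C": 5.1,
    "State D": 3.9,
    "State E": 4.0,
}

# The "statistic": a single, readily understandable number summarizing all the records.
median_rate = median(state_unemployment.values())
print(f"Median state unemployment rate: {median_rate:.1f}%")
```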





The best way to think about it is that the statistic is the big picture, and the data are the individual pixels that create it. (Insert Monet joke and Clueless reference here.)





So statistics are better than data, right?

Not necessarily. Whether you need data or statistics really depends on your research question. Data is needed when your research question addresses a new issue that hasn’t been explained or thoroughly explored yet – this requires a deep dive into data where you must analyze and derive meaningful knowledge that can answer your question.

A more straightforward research question, however, can be more quickly answered with statistics because the question has been asked before and so the analysis to answer that question has also already been done. For instance, a student who needs information on unemployment across the Rust Belt states can easily find an answer because that information is frequently processed by the federal government for its own assessment of the economic climate.

The difference between data and statistics lies in the analysis. Data needs to be analyzed to be understood, but a statistic can be understood right away. The next question is: how do I begin to analyze data to get the statistics I need? Stay tuned for my next blog post for tips on just that!





Diana Aleman is assistant editor on SAGE Stats and U.S. Political Stats, which simplify the statistical research process by providing ready-to-use statistics on the social sciences to students and faculty. She enjoys metadata challenges and wrangling raw data files into workable formats.