Feedback tagging

Summary

This is an overview of how we analyzed paidleave.mass.gov feedback. The end results of these analyses were the following charts:

After we finished, we passed the results along to Bryan for possible presentation at an Executive Steering Committee meeting.

Method

To perform the analysis, I first gathered all PFML feedback submitted between launch and the time of analysis (early February), then divided it into employer and claimant segments.

For each segment, I read through the associated feedback once quickly, developing a list of common themes I was seeing. Then I read through again, tagging each item with zero to five themes. I did my best to tag each item at least once, including using tags like "compliment" when users said only things like "Simple to use," or "wrong portal" when the feedback was clearly meant for the other segment (i.e., a comment about the employer process left as claimant feedback). When users did not leave a comment, I did not tag the feedback.
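As a reference for the structure, here is a hypothetical Python sketch of what the tagged data looks like, with each piece of feedback carrying zero to five tags. The comments and tag names are illustrative, not taken from the actual spreadsheet:

    # Hypothetical representation of the tagging pass: each item keeps its
    # comment and zero to five theme tags; items with no comment get no tags.
    tagged_feedback = [
        {"comment": "Simple to use", "tags": ["compliment"]},
        {"comment": "Why can't I approve part of the request?",
         "tags": ["want to partially approve"]},
        {"comment": "", "tags": []},  # no comment, so no tags
    ]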

One improvement I made between tagging claimant feedback (which I did first) and employer feedback was to begin formulating tags as problem statements, or at least as descriptions of the issue(s) users were describing. For example, in the first round of feedback, I included tags such as "unclear benefits," "salaried workers," and "login." However, these tags don't adequately illustrate what trouble users were having. In the second round, I wrote tags such as "want to partially approve," "want an email confirmation," and "trouble logging in." Including a verb (or at least an adjective that implies a value judgment) made for clearer results.

Lastly, it was important to keep the authoritative list of tags in a separate, easily accessible document or spreadsheet tab, so that you don't end up with multiple tags referring to the same theme, e.g. "trouble logging in" and "login trouble." Splitting a theme across tags makes it appear less frequent in the final analysis than it really is.
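If the tagging ever moves out of a spreadsheet, a small script can do the same duplicate-prevention job. This is only a sketch, assuming the authoritative list lives in a Python set; it flags any applied tag that isn't on the list (all names here are illustrative):

    # Authoritative tag list, kept in one place so near-duplicates like
    # "login trouble" vs. "trouble logging in" are caught before counting.
    AUTHORITATIVE_TAGS = {
        "trouble logging in",
        "want an email confirmation",
        "want to partially approve",
        "compliment",
        "wrong portal",
    }

    def unknown_tags(applied_tags):
        """Return any applied tags that aren't on the authoritative list."""
        return [tag for tag in applied_tags if tag not in AUTHORITATIVE_TAGS]

    print(unknown_tags(["login trouble", "compliment"]))  # ['login trouble']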

Data aggregation

Once all the feedback was tagged, I counted the occurrences of each tag. (This produces a much larger total count of tags than of feedback items, since multiple tags can be associated with a single piece of feedback.)
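A minimal sketch of the counting step, assuming the tagged feedback is in the list-of-records form sketched earlier:

    # Count how many times each tag occurs across all feedback. Because one
    # piece of feedback can carry several tags, the totals here exceed the
    # number of feedback items.
    from collections import Counter

    tag_counts = Counter(
        tag for item in tagged_feedback for tag in item["tags"]
    )
    for tag, count in tag_counts.most_common():
        print(f"{tag}: {count}")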

After counting, I had a table with two columns: tag and count of occurrences. I was asked to visualize this as a dual-axis bar chart, with count on one y-axis and the running percent of total on the other; the x-axis was the list of tags.
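A sketch of that chart using pandas and matplotlib, with the caveat that the data comes from the placeholder counts above, not the real feedback. It also keeps only the top tags, per the recommendation below:

    # Dual-axis bar chart: tag counts on the left y-axis, running percent of
    # total on the right.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.DataFrame(tag_counts.most_common(), columns=["tag", "count"])
    df["running_pct"] = df["count"].cumsum() / df["count"].sum() * 100
    df = df.head(20)  # top tags only; see the readability note below

    fig, ax1 = plt.subplots(figsize=(10, 5))
    ax1.bar(df["tag"], df["count"])
    ax1.set_ylabel("Count of occurrences")
    ax1.tick_params(axis="x", labelrotation=45)

    ax2 = ax1.twinx()  # second y-axis for the running percent of total
    ax2.plot(df["tag"], df["running_pct"], color="tab:orange", marker="o")
    ax2.set_ylabel("Running % of total")
    ax2.set_ylim(0, 100)

    fig.tight_layout()
    plt.show()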

It's likely that you'll have quite a few tags, and I would recommend charting just the top 15 or 20. You can see above how much harder the employer visualization is to read than the claimant one.
