In general, when somebody asks to download their TRC data as a CSV, it’s often the wrong thing. Once you’ve exported to a CSV, you lose out on the benefits of TRC, including our mobile canvassing apps. It also causes problems: a) how do you keep the downloaded CSV current with changes? b) how do you ensure data collected against that CSV gets uploaded back to your account? And usually there’s a better way to accomplish the underlying scenario…
This article describes best practices for campaigns using data to target voters, and how you can achieve that with TRC.
The flow here is to start by getting initial public data from the county auditor, merge in microtargeting information, choose the targets, canvass, and iterate on the model.
Conceptually, we can think of the data like a giant spreadsheet (CSV file). Each row is a voter, and columns are information about that voter. We’ll walk through the different phases with a small sample of 8 records, but TRC can help you do this with your entire district of 80,000 records.
“Blame” is a free reporting plugin for TRC to help you analyze canvassing results. Blame provides easy pivots (“business intelligence”) on the data collected by your canvassers. This helps you answer key questions that lead to action:
- How many doors did your team hit per day?
- Who exactly did you contact and what was the result? Pull the details into Excel for further analysis.
- How many supporters did you identify?
- What was the specific result for each canvasser? Which volunteers should be rewarded and which need more coaching?
- Are there suspicious trends in the data?
(The name Blame comes from similar tools in the software industry that developers use to find who edited a file and introduced a bug.)
You can launch Blame from the plugin menu:
Or by appending &plugin=Blame2 at the end of your login link.
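As a sketch of that URL convention, a helper like the following could append the plugin parameter to a login link. The helper name and the example link are illustrative assumptions; only the `plugin=Blame2` parameter comes from the text above.

```typescript
// Hypothetical helper: append the Blame2 plugin parameter to a TRC login link.
// Uses "&" when the link already has a query string, "?" otherwise.
function withBlamePlugin(loginLink: string): string {
  const sep = loginLink.includes("?") ? "&" : "?";
  return loginLink + sep + "plugin=Blame2";
}
```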
TRC tracks each individual edit supplied by a user. An edit includes not only the actual change to the sheet (“voter #5472 is a supporter”), but also timestamps, geo location, user id, and even which plugin made the edit.
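Conceptually, each edit carries the metadata described above. The field names in this sketch are illustrative assumptions, not TRC’s actual schema:

```typescript
// Rough sketch of the metadata TRC records per edit (field names assumed).
interface EditRecord {
  recordId: string;   // which voter row was changed, e.g. "5472"
  column: string;     // which column changed
  newValue: string;   // the value written, e.g. "supporter"
  timestamp: string;  // when the edit was made (ISO-8601)
  lat?: number;       // geo location, if available
  lng?: number;
  userId: string;     // who made the edit
  plugin: string;     // which plugin submitted the edit
}

const example: EditRecord = {
  recordId: "5472",
  column: "Party",
  newValue: "supporter",
  timestamp: "2020-10-24T14:03:00Z",
  lat: 47.61,
  lng: -122.33,
  userId: "canvasser-jane",
  plugin: "Canvass",
};
```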
There’s a timeline chart showing you edits per day. You can use the slider bar at the bottom to zoom in on a range, such as a super Saturday.
Blame provides pivots. For example, you can see number of supporters identified and by whom.
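The “supporters by user” pivot boils down to a group-and-count over the edit log. This is a minimal sketch of that idea, assuming a `"supporter"` result value (the actual stored value in TRC may differ):

```typescript
// Count supporter identifications per canvasser from a list of edits.
// The "supporter" value is an assumption for illustration.
function supportersByUser(
  edits: { userId: string; newValue: string }[]
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const e of edits) {
    if (e.newValue === "supporter") {
      counts.set(e.userId, (counts.get(e.userId) ?? 0) + 1);
    }
  }
  return counts;
}

const pivot = supportersByUser([
  { userId: "jane", newValue: "supporter" },
  { userId: "jane", newValue: "supporter" },
  { userId: "bob", newValue: "opposed" },
]);
```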
Blame also presents a “grid view” of all the individual edits. This provides a convenient way to see just the values that have changed. You can view and download all the edits in a single spreadsheet:
If a single record has multiple changes, blame will flag it and let you drill into more detail and see the exact history.
This can be useful to identify records changed by multiple people.
TRC is a canvassing tool that can pull data from a variety of different sources. For example:
|Data|Source|Accuracy|
|---|---|---|
|Voter names, ages, addresses|Secretary of State VRDB|Perfect – the SOS is the source of truth.|
|Map view|Geocoding the address to get a latitude and longitude|High – we try to get the pin right on the house.|
|GOTV – did you mail in your ballot?|County auditor|High – but there can be a lag between when the ballot is mailed and when the auditor reports it.|
|Voter history|Secretary of State historical files|Perfect|
|Past precinct results|Secretary of State|Perfect – although precinct boundaries and populations change over time.|
What about Party Id?
Party Id is determining which party a voter is aligned with. Democrat? Republican? Libertarian? Other? A common convention is assigning a “party id score” on a scale of 1 (hard GOP) … 5 (hard Democrat). 3 is independent, 0 means unknown. This is crude and deeply flawed (how do you represent people that split their tickets?), but it’s still widely used by campaigns.
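The conventional 1–5 scale can be written as a simple lookup. The endpoints (1, 3, 5, and 0) come from the description above; the “lean” labels for 2 and 4 are the usual interpretation and are an assumption here:

```typescript
// The common party-id score convention: 1 = hard GOP … 5 = hard Democrat,
// 3 = independent, 0 = unknown. Labels for 2 and 4 are assumed.
const PARTY_ID_LABELS: Record<number, string> = {
  0: "unknown",
  1: "hard GOP",
  2: "lean GOP",
  3: "independent",
  4: "lean Democrat",
  5: "hard Democrat",
};
```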
While most of the data has an official source, there’s no definitive list of party identification. So organizations that provide party id must make an educated guess based on the data they do know – such as whether you voted in the Democratic presidential primary or whether your PDC donations show strong contributions to Republicans. If new data comes in, we update the guess. This gets awkward if the first guess was right and the second guess is wrong.
Pinned vs Floating
TRC helps you cope with this uncertainty. Most tools treat the values as static numbers, which unfortunately means you don’t know the source or confidence of a value. TRC tracks values as either “Pinned” or “Floating”:
1. Once you change a value, it is “pinned” and that changed value should never get overwritten by somebody else. (You can see the full audit history here in the History tabs or in the Blame plugin.)
2. But before you change it in TRC, the value is “floating” and can be updated underneath you when we rebuild the models.
TRC addresses this by letting you “pin” values, and by giving each user their own “sandbox” that lets them track their own specific values.
We mark any Floating values with a “?” after the party id. This lets users know that it’s a guess and may change underneath you.
So for example, M Dunwiddie starts with:
The ‘?’ means that the data can change. The 5 means our guess is hard democrat. But what if we then see that M Dunwiddie voted in a Republican primary and donated $100 to a Republican candidate? The data team could pick up that data and update the model to a ‘1’. But even then, new data could flip it back to a ‘5’ (such as if the data team later found she donated $10,000 to a democrat).
But regardless of what the data team does with floating values, say I then go in and explicitly change her to a 1. The cell goes green, and the question mark is now removed.
And when I refresh the browser, the green highlights reset but the question mark stays removed. The lack of a question mark tells me the value is now “pinned” and won’t change. (This behavior applies only to the Party column.)
1. Once the cell is green, it’s saved to the server. Conversely, if the cell does not turn green, it hasn’t been saved.
2. If the party column has a “?” next to it, the value may change on you. So if the value is already correct but has a ‘?’ next to it, go in and deliberately change it so it turns green. That will pin the correct value.
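The display convention above can be sketched in a few lines. How TRC actually stores the pinned flag internally is an assumption; only the trailing-“?” display rule comes from the text:

```typescript
// Sketch of the floating-vs-pinned display convention: a trailing "?"
// marks a floating (model-supplied) value; a pinned value has none.
function displayPartyId(value: string, pinned: boolean): string {
  return pinned ? value : value + "?";
}

// A displayed value is floating exactly when it ends with "?".
function isFloating(display: string): boolean {
  return display.endsWith("?");
}
```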
Washington State is a vote-by-mail state, and voters have about 3 weeks before the election to mail in their ballots. Voter-Science tracks the ballots that are received and provides several tools to aid your Get-Out-The-Vote (GOTV) efforts.
1) The GOTV Reports
Voter-Science provides GOTV reports – see the Turnout Report plugin. Note – your account must be enabled for Ballot Chase in order for this plugin to work.
This report includes useful information like:
- voter turnout statistics
- breakdowns by party and targets
- breakdown by result of canvassing
- identified supporters that haven’t yet voted
- per-precinct breakdowns
- and even heat maps of turnout:
2) Names are crossed off in the List View
For example, in the screen shot below, Nancy and Marvin have already voted and so their names have been automatically crossed off.
This is critical for get-out-the-vote: if somebody has already cast their ballot, there’s no need to contact them further for GOTV.
3) Mobile app shows the ballot was received
The mobile apps will indicate when a voter’s ballot has been received.
4) Usage with Filters
Ballots are tracked by creating a new “XVoted” column in your sheet. It’s a ‘1’ if the ballot has been received. You can also use the Filter tool to filter on XVoted just like any other column and use that to create custom heat maps (Supporters that haven’t voted) or specific child sheets.
For map users, a common “Targeted voters” filter is “IsFalse(XVoted) && IsTrue(XTargetPri)”. This means “only include people whose ballot is not yet received and who are on the targeted list”.
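As a sketch, that filter expression could evaluate against a row like this. The column names come from the text above; the assumption here is that column values are stored as `"1"`/`"0"` strings and that `IsFalse` means “not true” (whether TRC’s `IsFalse` instead requires an explicit `"0"` is not specified):

```typescript
// Sketch of evaluating the "IsFalse(XVoted) && IsTrue(XTargetPri)" filter.
// Assumes columns hold "1"/"0" strings and IsFalse means "not true".
type Row = Record<string, string>;

const IsTrue = (v: string | undefined): boolean => v === "1";
const IsFalse = (v: string | undefined): boolean => !IsTrue(v);

// "Only include people whose ballot is not yet received
//  and who are on the targeted list."
function isTargetedNonVoter(row: Row): boolean {
  return IsFalse(row["XVoted"]) && IsTrue(row["XTargetPri"]);
}
```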
There is some delay between when a person puts their ballot into the mail, when it’s received by the county auditor, and when the auditor reports having received it. This is tracked per county, and counties report at different speeds. This means that if you see a name crossed off, you can be confident the ballot was received – but if a name is not crossed off, the ballot may simply still be in transit.