
Tech & Check Notes

Published on Mar 29, 2019

These are partial notes; please add yours! Other notes + discussions:

Day 1

Intro

Bill welcomes everyone, individual intros.
Thanks to Tech & Check funders: Knight Foundation, the Facebook Journalism Project, Craig Newmark, and Google (ClaimReview + Share the Facts)

Duke Tech & Check: Automated journalism in action

  • Pop-up fact-checking for TV and web

  • Deploying ClaimBuster to help fact-checkers prioritize claims to check (Tech + Check alerts)

  • Co-op: community of scientists/journos coming together to share knowledge

Projects:

  • FactStream app + live fact-checking during e.g. debates and speeches

  • Tech & Check Alerts sends fact-checkers daily tips about claims they might check; it scrapes Twitter for check-worthy statements and emails an alert once a day (see the sketch after this list)

  • Squash, a new pop-up product where fact-checks are instantly overlaid on political speeches and debates

  • Talking-point tracker follows words and phrases and helps fact-checkers and political journalists see which trends are emerging

  • Truth Goggles started when Dan was at the MIT Media Lab; pop-up fact-checking reborn as a product to help fact-checkers identify words and phrases that might be more or less appealing to different audiences
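
A minimal sketch of the Tech & Check Alerts flow mentioned in the list above: score scraped statements for check-worthiness and keep only the top ones for a daily digest. The fetch_recent_tweets() stub, the API-key handling, and the threshold are assumptions for illustration; the ClaimBuster scoring endpoint shown is my understanding of the project's public API and should be verified against current docs.

```python
import requests

# ClaimBuster scoring endpoint (verify against current API docs before use).
CLAIMBUSTER_URL = "https://idir.uta.edu/claimbuster/api/v2/score/text/"
API_KEY = "YOUR_API_KEY"  # hypothetical placeholder

def fetch_recent_tweets(handles):
    # Hypothetical stub: the real system scrapes Twitter for these accounts.
    return ["We created 500,000 jobs last month.", "Good morning, everyone!"]

def checkworthiness(sentence):
    # Ask ClaimBuster how check-worthy a sentence is (0 = not at all, 1 = very).
    resp = requests.get(CLAIMBUSTER_URL + sentence, headers={"x-api-key": API_KEY})
    return resp.json()["results"][0]["score"]

def daily_alert(handles, threshold=0.7):
    scored = [(checkworthiness(t), t) for t in fetch_recent_tweets(handles)]
    return sorted([pair for pair in scored if pair[0] >= threshold], reverse=True)

if __name__ == "__main__":
    for score, text in daily_alert(["@some_official"]):
        print(f"{score:.2f}  {text}")
```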

Demos (details in their own doc)

Dan + Jason: Truth goggles, talking-point tracker

Arjun Moorthy: OwlFactor ([email protected])
Demo: owlfactor.com/news Password: dukedemo

Nadine Ajaka: Taxonomy for video fact-checks
([email protected])

1. Context: misrepresentation, snippets
2. Editing: omission, spliced
3. Transform: doctored, synthetic media

Mevan Babakar: FullFact features + Alpha

  1. Media monitoring

  2. Pulling in claims

  3. Claim detection

  4. Claim matching

  5. Robochecking
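
A rough sketch of how those five stages fit together, with toy placeholder logic at each step. This illustrates the shape of the pipeline only; it is not Full Fact's implementation.

```python
def media_monitoring():
    # 1. Collect raw text from monitored outlets (stub).
    return ["Unemployment has halved since 2010, the minister said. The weather was lovely."]

def pull_claims(documents):
    # 2. Split monitored text into candidate sentences.
    return [s.strip() for d in documents for s in d.split(".") if s.strip()]

def detect_claims(sentences):
    # 3. Keep sentences that look like checkable factual claims (toy heuristic).
    return [s for s in sentences if any(ch.isdigit() for ch in s)]

def match_claims(claims, existing_factchecks):
    # 4. Pair new claims with previously checked claims (toy word-overlap match).
    return [(c, f) for c in claims for f in existing_factchecks
            if set(c.lower().split()) & set(f.lower().split())]

def robocheck(claim):
    # 5. For simple statistical claims, check automatically against source data (stub).
    return f"Would query official statistics for: {claim!r}"

if __name__ == "__main__":
    claims = detect_claims(pull_claims(media_monitoring()))
    print(match_claims(claims, ["Unemployment has fallen by half since 2010"]))
    print(robocheck(claims[0]))
```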

Caio Almeida: Meedan + WhatsApp

Bill Adair — Real-time video/transcript fact-checking + overlays.
Viewer tests confirm people are eager to see this rather than raw broadcasts.

Delip Rao -- AI Foundation

Detection of forged/synthetic content: visual, audio, and text.
FaceForensics, forensic transfer. Detecting generated audio.
As of 2019, replay attacks on audio are no longer considered an attack vector.

Zoher Kachwala — FactCheckGraph

Example: using FRED, which runs on Neo4j, to refine graph representations of claims and entities.
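
A toy sketch of storing a claim as a small graph in Neo4j, roughly the kind of structure a FRED-style parse could be reduced to. The connection details, labels, and property names are assumptions for illustration (neo4j Python driver 5.x call style).

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_claim(tx, subj, pred, obj, source):
    # Merge subject/object entities and a CLAIMS edge carrying the predicate.
    tx.run(
        "MERGE (s:Entity {name: $subj}) "
        "MERGE (o:Entity {name: $obj}) "
        "MERGE (s)-[:CLAIMS {predicate: $pred, source: $source}]->(o)",
        subj=subj, pred=pred, obj=obj, source=source,
    )

def claims_about(tx, entity):
    result = tx.run(
        "MATCH (s:Entity {name: $name})-[r:CLAIMS]->(o) "
        "RETURN r.predicate AS predicate, o.name AS object",
        name=entity,
    )
    return [record.data() for record in result]

with driver.session() as session:
    session.execute_write(add_claim, "Candidate X", "created", "500,000 jobs", "speech-2019-03-28")
    print(session.execute_read(claims_about, "Candidate X"))
driver.close()
```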

Day 2

Mark — voice-to-text. Still hard: transcripts are about 75% correct, so fuzzy matching is crucial.
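
A quick illustration of why fuzzy matching matters at roughly 75% transcript accuracy: an exact lookup misses, while a similarity ratio still finds the claim. Standard library only; the 0.8 threshold and the example sentences are illustrative.

```python
from difflib import SequenceMatcher

known_claim = "we created five hundred thousand jobs last month"
asr_output = "we create five hundred thousand job last month"  # typical ASR errors

def fuzzy_match(a, b, threshold=0.8):
    ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return ratio >= threshold, round(ratio, 2)

print(asr_output == known_claim)             # False: exact match fails
print(fuzzy_match(asr_output, known_claim))  # (True, ~0.98): fuzzy match recovers it
```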

Dan S — callback to HyperAudio.
We need more organized/structured gathering of captions for all video.

Mevan — Captioning: someone listens to the audio and speaks into a mic in a specific way, like court transcription. People do it in 20-minute shifts; it is tiring!

%% — We can’t wait for most news orgs to produce transcripts the way CNN does; we need to extract our own captions (otherwise it’s slow, incomplete, and has copyright issues). Roger M: we’ll help!

Jun — for Squash, no single service provides everything we need. Much comes down to whether a service offers a specific feature (does it annotate each word with a timestamp? does it add punctuation? does it separate speakers?). Then if we deal with Google, Microsoft, and Amazon, we have to sync across different clouds.

1. If we can make a wishlist for a transcription service, that would help us and others providing those services (see the sketch below).
2. Some people in speech-to-text think it is ‘solved’, in that it beats human recognition in some ways. But there are accents + other biases…
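
A sketch of that wishlist expressed as a common transcript format: per-word timestamps, punctuation, and speaker labels, so output from different cloud services can be normalized into one shape before syncing. Field names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Word:
    text: str
    start: float      # seconds from start of recording
    end: float
    speaker: str      # diarization label, e.g. "spk_0"
    confidence: float

@dataclass
class Transcript:
    source: str                 # e.g. "google", "azure", "aws"
    utc_start: Optional[str]    # UTC timestamp of recording start, if known
    words: List[Word] = field(default_factory=list)

    def sentences(self):
        """Group words into rough sentences using the punctuation the service added."""
        current, out = [], []
        for w in self.words:
            current.append(w)
            if w.text.endswith((".", "?", "!")):
                out.append(current)
                current = []
        if current:
            out.append(current)
        return out
```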

SJ: Is there a canonicalization service for recordings / transcripts?
A catalog of events, and for each a list of [sentences] in a talk, which any number of recordings / captions / resolved entities can link back to, so that many analysts can find what others are saying about the same snippet.
Roger M: UTC for timestamps, when you can figure it out, is a global identifier.
Jun: often you have YouTube videos with no UTC timestamp, but you can estimate time from the start of the recording.
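
A rough data-structure sketch of the canonicalization idea: one catalog entry per event, a numbered list of sentences, and any number of recordings or captions pointing back at the event by ID, offset, or UTC time. All names here are illustrative; no such service exists yet.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Sentence:
    index: int
    text: str
    utc: Optional[str] = None        # UTC timestamp when spoken, if it can be figured out

@dataclass
class Recording:
    url: str
    utc_start: Optional[str] = None  # None for e.g. a YouTube upload with no metadata
    offset_seconds: float = 0.0      # estimated offset from the start of the event

@dataclass
class Event:
    event_id: str                    # canonical identifier, e.g. "2019-03-28-town-hall"
    title: str
    sentences: List[Sentence] = field(default_factory=list)
    recordings: List[Recording] = field(default_factory=list)

def cite(event: Event, sentence_index: int) -> str:
    """A stable reference so many analysts can point at the same snippet."""
    return f"{event.event_id}#s{sentence_index}"
```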

Dan S: Glorious Contextubot helps you find all the times a clip appears in different places.

Working Groups

ClaimBuster 2.0

ClaimReview future arc

Opportunities: Link to TV/video. Normalize ratings.

New fields: context/explanation. Type of misinfo. Type of claim. Complexity of claim. Topics. Underlying fact(s).

Uses: Identify publishers/authors of false content and how it is spreading.
Browser alerts. Media literacy product to track claims. Search add-on that also shows what’s true, not just a check-window.
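
For reference, a sketch of what an extended record could look like. The top-level fields are standard schema.org ClaimReview; the "x-" fields are hypothetical names for the additions proposed above and are not part of any spec.

```python
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/jobs-claim",
    "datePublished": "2019-03-28",
    "author": {"@type": "Organization", "name": "Example Fact Check"},
    "claimReviewed": "We created 500,000 jobs last month.",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly False",
    },
    # Hypothetical extensions sketching the new fields proposed above:
    "x-context": "Figure refers to gross, not net, job creation.",
    "x-misinfoType": "missing context",
    "x-claimType": "statistical",
    "x-claimComplexity": "simple",
    "x-topics": ["economy", "employment"],
    "x-underlyingFacts": ["https://example.org/data/jobs-report-2019-02"],
}

print(json.dumps(claim_review, indent=2))
```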

Crowdsourcing models and options

Examples:
Wikipedia current events (top breaking news only; decentralized; effective tertiary review),
WikiTribune (some centralization, crowd contribution, can be effective. Wants to be secondary source, not just a tertiary one)
Truth Squad at PolitiFact: members can directly check, but mostly donate to support professional checkers. Supported on Facebook among other spaces.
Climate Feedback + Health Feedback: 400 experts. How fast is turnaround? How scalable is this model?
[noted later] Wiki Med: review of health articles.

Ongoing challenges + efforts

ClaimBuster ideas and lessons

Tool presentation, Chengkai Li.

  • This was a first attempt at a complete real-time claim-to-check lifecycle.

  • Raw media → claim identification → analysis → annotation → [reputation?]

  • (should link to slides: detailed, with many individual components worth discussing / sharing / enhancing with multiple inputs)

Claim matching approaches

Chris Guess

  • Currently using: Elasticsearch similarity search. Can be used as a pre-filter to NN analysis (see the sketch after this list).

  • Neural net (across 20k claims) was initially slow. 15 min to update (with 5 GPUs? to clarify)

  • This could work on any language, especially languages where word order is less important [Latin, Chinese…]

  • 25,000 fact-checks in database. Better to have a training dataset of at least 500,000. (looks meaningfully at audience)
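
A sketch of the two-stage matcher described in this list: a cheap Elasticsearch lexical query narrows the fact-check corpus to a handful of candidates, and a slower model reranks only those. The index name, field name, and the rerank stub are illustrative (elasticsearch-py 8.x call style); this is not the production code.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def prefilter(claim, k=20):
    # Stage 1: BM25-style similarity over the fact-check corpus (fast, recall-oriented).
    hits = es.search(index="factchecks", query={"match": {"claim_text": claim}}, size=k)
    return [h["_source"]["claim_text"] for h in hits["hits"]["hits"]]

def neural_rerank(claim, candidates):
    # Stage 2 placeholder: a real system scores each pair with a trained neural model.
    def overlap(a, b):
        return len(set(a.lower().split()) & set(b.lower().split()))
    return sorted(candidates, key=lambda c: overlap(claim, c), reverse=True)

query = "We created 500,000 jobs last month"
print(neural_rerank(query, prefilter(query))[:3])
```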

Aside: looking for software collaborators

Entailment: Jun joins the conversation on how this impacts matching

  • Semantic/linguistic nuance capturing implication, both positive and contradictory: a) all Y are Z, X is Y, vs X is Z; b) X/X’/X’’ is Y/Y’/Y’’; c) X did Y vs Y did X. [example for b: ‘the wealthiest/richest/top 100 companies/orgs have 50% of global wealth/assets/revenue/profit’] (see the sketch after this list)

  • Good to know that a snippet is incomplete (grammatically or contextually)

  • Good to know what axioms are likely / explicitly referenced + entailed.

  • Consider ‘partial entailments’
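
A minimal entailment probe for the cases above (e.g. ‘X did Y’ vs ‘Y did X’), using an off-the-shelf NLI model. roberta-large-mnli is just one publicly available choice, not what any team here uses; the label names come from the model's own config.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment(premise: str, hypothesis: str) -> dict:
    # Score the (premise, hypothesis) pair and return per-label probabilities.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    return {model.config.id2label[i]: round(p, 3) for i, p in enumerate(probs.tolist())}

# 'X did Y' vs 'Y did X' should not come out as entailment:
print(entailment("The committee investigated the senator.",
                 "The senator investigated the committee."))
```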

Current + new matching tools:

  • ? (Simon),

  • ? (Mevan),

  • BERT (supports fine-tuning with little labelled data; see the sketch below)
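
A sketch of BERT-style semantic matching using a sentence-embedding model. The model name below is one public option, not what any group here deploys; such models can also be fine-tuned on a small labelled set of claim pairs, which is the appeal noted above.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice of encoder

fact_checks = [
    "Unemployment has fallen by half since 2010.",
    "The top 100 companies hold 50% of global wealth.",
]
new_claim = "The richest hundred firms control half the world's assets."

# Embed once, then rank stored fact-checks by cosine similarity to the new claim.
scores = util.cos_sim(model.encode(new_claim), model.encode(fact_checks))[0]
for text, score in sorted(zip(fact_checks, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.2f}  {text}")
```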

Questions:

  1. How do we identify the family of relationships, from none to 100% synonymous? “Matching” needs to be textured. Where does human input lend clarity? Is it just labelling a model’s results as good/bad?

  2. Training data limitations: essential; availability and quality make all the difference. How do we get it, what are the limitations of what we have?

  3. Reputation/credibility: if we allow release of things with false positives, will that destroy faith in us?

  4. Explainability: why are we doing fact-checking here, and what are the implications? Does the need for explainability wane with increased accuracy?

Challenges in Database Matching

Matt O’Boyle: Duke undergrad.

Goal: give “live” fact-checks during political speeches; requires a 30-45 second buffer

Pipeline: the Google Cloud Speech API gives a full transcription of the live speech. The ClaimBuster API filters it, given a desired threshold for checking; the resulting snippets are looked up in databases like PolitiFact’s.
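
A condensed sketch of that pipeline: transcribe a 30-45 second buffer, keep the check-worthy sentences, and look each one up against stored fact-checks with a fuzzy match. The transcription and scoring functions are hypothetical stubs, not the real Google Cloud Speech or ClaimBuster calls.

```python
from difflib import SequenceMatcher

FACT_CHECKS = {
    "we created five hundred thousand jobs last month": "Mostly False",
}

def transcribe_buffer(audio_chunk) -> str:
    # Stub for the streaming speech-to-text call on a 30-45 second buffer.
    return "We created five hundred thousand jobs last month. Thank you all for coming."

def is_checkworthy(sentence: str) -> bool:
    # Stub for the ClaimBuster score-and-threshold step (toy heuristic here).
    return any(ch.isdigit() for ch in sentence) or "thousand" in sentence.lower()

def lookup(sentence: str, min_ratio: float = 0.8):
    # Fuzzy match against the fact-check database to tolerate transcription errors.
    for claim, rating in FACT_CHECKS.items():
        if SequenceMatcher(None, sentence.lower(), claim).ratio() >= min_ratio:
            return claim, rating
    return None

for sentence in transcribe_buffer(b"...").split("."):
    sentence = sentence.strip()
    if sentence and is_checkworthy(sentence):
        print(sentence, "->", lookup(sentence))
```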

Problems:
1) Mistranslation / mistranscription (political debate poses specific challenges, e.g. applause, simultaneous speech)
2) Incorrect claim grouping
3) Missed Context
4) Matching algorithm mistakes

Closing remarks

Thanks to all; this community is a central outcome of the work.

Monthly calls; all invited to join or present (10- to 40-minute presentations).
For instance: more on pop-up video annotation in a future call.

There is also a related mailing list for periodic messages; please get in touch if you are not included.
