Anthony D. Cate, Ph.D.
April 12, 2019
Humans often understand complex processes by forming mental models of them
Problem: the human mental model of an algorithm is often just a black box between input and output
“You can’t play 20 questions with nature and win.” — Allen Newell (1973)
Cognitive processes don’t work in isolation
They can’t be understood by posing one binary experimental question at a time
Courses of action to help “win”

Instead of scrutinizing an algorithm’s input-output relationships …
… present users with related information that is organized into groups whose dissimilarities are intuitive
“Glue things onto the black box,” and show how the algorithm interacts with its context, e.g., by dividing its users into groups
“Break open the black box,” and show how components of the algorithm categorize inputs and outputs into groups (a minimal sketch follows this list)
“Break open the black box,” and let its components generate outputs in a human-intuitive modality (e.g., images)
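
A minimal sketch of the component-grouping idea above, assuming Python with NumPy and scikit-learn: the random activations array is a stand-in for a real model’s hidden-layer output, and k-means shows how a component’s internal representation groups inputs.

```python
# Sketch: "break open the black box" by clustering a component's
# hidden-layer activations, so users see which inputs that component
# treats as similar. The random "activations" below are a stand-in
# for a real model's intermediate layer output.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in activations for 60 inputs (20 per latent group), 8 units each.
activations = np.vstack([
    rng.normal(loc=center, scale=0.5, size=(20, 8))
    for center in (-2.0, 0.0, 2.0)
])

# Group inputs by how the layer represents them, not by raw input features.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(activations)

for k in range(3):
    members = np.flatnonzero(labels == k)
    print(f"group {k}: inputs {members.tolist()}")
```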
Humans understand certain kinds of relationships intuitively
Similarity between groups of comparable items
Similarity relationships allow inference of structure
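
A minimal sketch of inferring structure from similarity alone, assuming Python with SciPy; the item names and similarity matrix are toy values invented for illustration.

```python
# Sketch: given only pairwise similarities between items, hierarchical
# clustering recovers a group structure over them.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

items = ["robin", "sparrow", "salmon", "trout", "oak"]

# Toy similarity matrix (1 = identical); any intuitive similarity source works.
S = np.array([
    [1.0, 0.9, 0.2, 0.2, 0.1],
    [0.9, 1.0, 0.2, 0.2, 0.1],
    [0.2, 0.2, 1.0, 0.8, 0.1],
    [0.2, 0.2, 0.8, 1.0, 0.1],
    [0.1, 0.1, 0.1, 0.1, 1.0],
])

# Convert similarity to distance and cluster hierarchically.
D = squareform(1.0 - S, checks=False)
tree = linkage(D, method="average")

# Cut the tree into 3 groups: birds, fish, and the lone plant emerge.
for item, group in zip(items, fcluster(tree, t=3, criterion="maxclust")):
    print(f"{item}: group {group}")
```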
Embodies Newell’s proposed remedy: “one program for many tasks”
A Deep Neural Network (DNN) trained to classify images is used to generate images
Humans intuitively interpret differences between the input and generated images to understand each layer’s function
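
A minimal sketch of this idea (activation maximization), assuming PyTorch; the small untrained CNN is a stand-in for a real trained classifier, whose optimized image would reveal what a chosen layer responds to.

```python
# Sketch: use a classification network generatively by optimizing an
# input image so a chosen layer's response grows, then compare the
# result with the starting image.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in network; in practice, a trained model's early layers.
layers = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
)

image = torch.rand(1, 3, 64, 64, requires_grad=True)
start = image.detach().clone()
opt = torch.optim.Adam([image], lr=0.05)

# Gradient ascent on the mean activation of the final layer.
for step in range(100):
    opt.zero_grad()
    loss = -layers(image).mean()   # negate: maximize activation
    loss.backward()
    opt.step()

# The difference image shows what this layer "adds" to the input.
print("mean absolute change:", (image.detach() - start).abs().mean().item())
```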
