You can’t play 20 questions with an algorithm and win

Anthony D. Cate, Ph.D.

April 12, 2019

20 Questions

The problem

  • Humans often understand complex processes by forming mental models of them

  • Problem: the human mental model of an algorithm is often just a black box between input and output

    • No “moving parts” to model

Allen Newell’s framework

  • “You can’t play 20 questions with nature and win.”

    • Cognitive processes don’t work in isolation

    • Can’t understand them by posing binary either/or questions, one experiment at a time

  • Courses of action to help “win”

    1. Make complete processing models
    2. Model a complex task
    3. Make one program for many tasks


  • i.e., make your model account for more than its usual inputs and outputs

Breaking algorithms

Breaking creates context

  • Instead of scrutinizing an algorithm’s input-output relationships …

  • … present users with related information that is organized into groups whose dissimilarities are intuitive

    1. “Glue things onto the black box,” and show how the algorithm interacts with its context by dividing, e.g., users into groups

    2. “Break open the black box,” and show how components of the algorithm categorize inputs and outputs into groups

    3. “Break open the black box,” and generate different outputs in a human-intuitive modality (e.g. images)

  • Humans understand certain kinds of relationships intuitively

    • Similarity between groups of comparable items

    • Similarity relationships allow inference of structure

      • Such structure can be a surrogate for understanding the (non-intuitive) computational structure of the algorithm
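The grouping idea above can be made concrete with a toy sketch: describe items by feature vectors (e.g., users or images characterized by an algorithm's outputs) and collect them into groups whose within-group similarity is high, so the group boundaries are the intuitive "structure" a human can read. The item names, vectors, and threshold below are all hypothetical, chosen only for illustration; this is a greedy single-link grouping, not any particular published method.

```python
import math

# Hypothetical feature vectors for a handful of items; the names and
# numbers are made up for illustration only.
items = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.1, 0.9, 0.8],
    "bus": [0.0, 0.8, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def group_by_similarity(items, threshold=0.9):
    """Greedy single-link grouping: put each item into the first group
    containing something sufficiently similar, else start a new group."""
    groups = []
    for name, vec in items.items():
        for group in groups:
            if any(cosine(vec, items[other]) >= threshold for other in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

# The resulting groups ("animal-like" vs. "vehicle-like") are intuitive
# even though the underlying vectors are not.
groups = group_by_similarity(items)
```

The point of the sketch: a viewer never needs to inspect the vectors themselves; the group structure serves as the surrogate for the algorithm's internal representation.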

Inceptionism

  • Embodies Newell’s “One program for many tasks”

  • A Deep Neural Network (DNN) trained to classify images is used to generate images

    • Draw different images to illustrate the function of different DNN layers

  • Humans interpret differences between input and output images intuitively to understand layers’ functions
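The mechanism behind Inceptionism is gradient ascent on the *input*: starting from an image, adjust its pixels to increase the total activation of a chosen layer, then compare the before/after images. The toy below captures only that mechanism, with a hypothetical two-unit ReLU "layer" and hand-written gradients in place of a trained Inception network and autodiff; every number in it is illustrative.

```python
# Toy stand-in for one DNN layer: two linear units with ReLU.
# (Hypothetical weights; a real run uses a trained network and autodiff.)
W = [[1.0, -2.0, 0.5],
     [0.5, 1.0, -1.0]]

def activations(x):
    """ReLU unit outputs of the toy layer for input x."""
    return [max(0.0, sum(wi * xi for wi, xi in zip(w, x))) for w in W]

def layer_score(x):
    """The objective Inceptionism maximizes: total activation of the layer."""
    return sum(activations(x))

def gradient(x):
    """d(layer_score)/dx: each active unit (pre-activation > 0) contributes
    its weight vector; inactive units contribute nothing (ReLU)."""
    g = [0.0] * len(x)
    for w in W:
        if sum(wi * xi for wi, xi in zip(w, x)) > 0:
            g = [gi + wi for gi, wi in zip(g, w)]
    return g

def dream(x, steps=50, lr=0.1):
    """Gradient ascent on the input itself, not the weights."""
    for _ in range(steps):
        g = gradient(x)
        x = [xi + lr * gi for xi, gi in zip(x, g)]
    return x

x0 = [0.1, 0.1, 0.1]          # the "starting image"
x1 = dream(x0)                # the "dreamed" input
```

Repeating this with the score taken from different layers yields different dreamed inputs, which is how the same classifier program is reused to illustrate each layer's function.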

Inceptionism as “One program for many tasks”