Monday, September 17, 2012

Stanford HCI Class - Notes


Hi folks,
here is a dump of my notes based on the magnificent HCI class held at:
https://www.coursera.org/course/hci
These notes include a recommended reading list + the essential points of each lecture.


Reading list

  • representation of information:
    • Edward Tufte: Envisioning Information (and other data vis/design books)
    • Herbert Simon: Sciences of the Artificial
    • Don Norman: Things that Make Us Smart
    • Jakob Nielsen: www.useit.com/alertbox
    • User Interface Engineering: Designing for the Scent of Information
  • visual design:
    • Jenifer Tidwell: Designing Interfaces
    • Robin Williams: The Non-Designer's Design Book
    • Kevin Mullet and Darrell Sano: Designing Visual Interfaces
    • Luke Wroblewski: Web Form Design
  • studies:
    • David Martin: Doing Psychology Experiments

Misc

  • challenge yourself to make up stories for objects
  • commitment escalation (ask for just a little at the start, then gradually ask for more, rather than asking for everything up front)

HCI class


Prototyping

  • Prototyping is a strategy for efficiently dealing with things that are hard to predict
  • Prototypes are not complete, they only prototype an aspect of the product like
    • What might it look like?
    • How might it work?
    • What might the experience be like?
  • MOTO Prototypes are questions, so ask a lot of them

Design evaluation

  • survey - show stuff to people, let them fill in a questionnaire
  • focus group - put a bunch of people/early users in one group and brainstorm with them
  • feedback from experts
  • comparative experiments - A/B testing..
  • Issues to Consider:
    • Reliability/Precision, Generalizability, Realism, Comparison, Work Involved
  • MOTO What do you want to learn?

History of HCI

  • "Long nose of inovation"
    • general adotpion starts very slowly, but when certain key aspects are introduced that reduce the friction, the adoption is exponential
    • it might be a good idea to look for things that are in the "long nose" stage and find the finishing touch to make them adoptable

Participant observation

  • find an existing need/problem, interview and observe people having the problem, and try to build empathy so you can look at it from their POV
  • deep hanging out - spend time with people, learn to do the work they have to do, their hacks/workarounds (e.g. when designing tools) -> live their life
  • a lot of what we do is so automatic that we cannot articulate it
  • 5 key things:
    • what do people do now to work around or solve the problem?
    • what values and goals do people have?  (design a product that will embed itself into the everyday life of the user - even if they introduce new concepts/functionality)
    • how are these particular activities embedded in a larger ecosystem? (e.g. if we were to design a bus, we have to look at the ecosystem of travel and its use cases - like getting to work, to a friend's house, or going shopping - and by understanding the goals and constraints of the ecosystem we can come up with bus ideas we would not see otherwise, e.g. by asking "what makes people select or not select a bus in particular situations?")
    • what are similarities and differences across users? (eg in the bus example - disabled people, people who want to save money, people who want to get from A to B as fast as possible)
    • what other types of context matter, like the time of day?
  • MOTO Don't pay attention to what people say, but what people do

Participant interview

  • participants:
    • target users (representatives of major user groups)
    • current users of a similar system (if you are creating "better X")
    • non-users [to learn about the barriers & goals] (if you want to broaden a set of people who can do a certain task)
  • use your social network to find people
  • reward with a token of appreciation - gift ticket, something related to the product
  • don't ask leading questions, use more open questions
    • leading question (bad) - "Is feature X important to you?"
    • open question (good) - "I see from the log that you have never used X, tell me why/more."
    • "What would you like in a tool?" - people don't know (faster horse) - ask people about their own lives/goals - they are experts in that
  • avoid:
    • What they would do/like/want in hypothetical situations.
    • How often they do things. (to fix it, make it more concrete - how many times have you done it this week)
    • How much they like things on an absolute scale. (what does 7 out of 10 mean?)
    • Avoid binary questions. ("Do you like grapefruit?", "Yes." - not too interesting)
  • good:
    • start with open ended questions and give a few seconds of silence to think
    • after those few seconds, you may hear a second story, which is often the more interesting one

Additional user study

  • diary studies/experience sampling (fill in a form in some event/interval)
    • if something happens over more time or is more sporadic
    • can scale better than observation
    • people write an entry at a specified time or interval
    • structured tasks (How happy do you feel? What have you eaten?)
    • paper notebook journals, camera/video recording/voice
    • tailor the method of input to the context - frictionless = better data
    • reminders so people don't forget
  • lead users - turn their individual hacks into a more generalized form
    • lead users innovate when no commercial solutions are available
  • extreme users - get 1k emails/day etc
    • learn how they use their tools and encapsulate that in a product
    • but don't forget the actual average user

Personas

  • way of preserving/categorizing the knowledge learned in studies/interviews/observation
  • abstract/model users who represent user groups
  • concrete attributes of a persona:
    • demographics, motivations, beliefs, intention, behavior, goals
  • keep your focus, coherence (when thinking about a new feature, think about your personas and how/if they would use it)
  • represent with a photo, name, occupation, story - build empathy
  • persona empathy leads to new insights

Prototyping

  • story boarding
  • paper prototypes
  • digital mockups
  • static HTML
  • dynamic code
  • database

Prototyping: Storyboarding
  • allows focus on the task that the interface will support, NOT the interface itself
  • always has a person in them
  • communicates flow - shows key points in time
  • "star people" (head as circle + body as a 4 point star)
  • 3 key elements:
    • setting: people involved, environment, task being accomplished
    • sequence: what leads someone to use the product, what steps they do to accomplish the task
    • satisfaction: what motivates the people to use the system, what problem/need does it solve, what people accomplish
  • time-limit the storyboard creation - about 10 minutes per board
  • USER FEEDBACK: viability of scenarios

Prototyping: Paper prototyping

  • screen/view prototypes on pieces of paper
  • collage of different elements
  • let people try to use the prototype while you role-play the prototyped "system"; get people involved
  • popups = post-it notes
  • layers = transparencies
  • USER FEEDBACK: informal "Hey try this out, here are 3 alternative ways this can work"

Prototyping: Digital mockup

  • when you get more specific about pixels
  • USER FEEDBACK: structured critique (questionnaire), controlled experiments (test task performance, orientation, intuitiveness, ..)

Prototyping: Early interactive prototypes (prototypes of stuff you have not implemented yet)

  • make interactive apps without (much) code
  • simulate machine interaction with humans, behind a real-ish interface
    • using a paper interface motivates people to give feedback (they don't fear hurting your feelings)
  • simulate machine-learning, personalization
  • 5 key elements:
    • limited set of functionality/scenarios
    • put together the UI skeleton
    • develop hooks for the human operator to intervene
    • rehearse with the operator, get the easy "bugs" in UI/logic out
    • don't overpromise/create technically unrealistic functions
  • 2 roles: narrator that organizes the interview with user, "wizard" that operates the prototype (preferably the wizard is hidden)
  • user feedback:
    • think aloud as you are performing tasks, getting stuck etc.
    • retrospective (best when thinking aloud would be distracting; showing a video of the person working with the system helps jog their memory)
    • heuristic evaluation (measure specific task metrics)

Prototyping: Video prototype

  • like storyboard, but on video
  • explanation of design ideas
  • specification for design decisions, prioritization, making sure you are not adding features that are not necessary
  • can use the paper interface as a prop for the product (or even invisible interface)
  • MVP defined by what you put in the video
  • key steps:
    • come up with an outline (can use the existing storyboards)
    • use basic equipment (phone camera is fine)
    • get your friends to be actors
    • use the real location where your target users are
    • edit as little as possible

Compare designs

  • rapidly producing many alternative designs leads to better results compared to being focused on very few perfect designs
  • functional fixation: people often start with 1 core idea and then they will push forever to make it work, rather than trying more core ideas in parallel
  • parallel prototyping separates your ego from the artifact (product) of the creation, you don't consider the feedback personal if you have many alternative products

Direct manipulation

  • advantages of GUI
    • input on top of output
    • immediate feedback on actions
    • continuous representations of objects
    • leverage of metaphors (buttons/handles/...)
  • always 2 steps:
    • performing action (problem: how does the user know what to do)
    • evaluating the outcome (problem: how does the user find out what happened)
  • problem identifying questions - How easily can someone:
    • Determine the function of the device?
    • Tell what actions are possible?
    • Determine mapping from intention to physical movement?
    • Perform a given action?
    • Tell what state the system is in? Is it in the desired state?
    • Create mapping between the system state and their interpretation?
  • problem reducing tips:
    • Visibility (add affordances, hints of possible action on active elements)
    • Feedback
    • Consistency with known standards
    • Non-destructive operations (allow undo, helps creativity through exploration)
    • Provide a systematic way to discover all the functions (e.g. through browsing a menu)
    • Reliability (no randomness, predictability)

Mental models

  • how the user sees the system in their head
  • the designer has their own model, which is often different
  • model mismatch leads to slow performance, errors, frustration
  • how are MM created:
    • analogy to an older interface ("MS word is like a typewriter")
    • we have models of everything/everybody we interact with
    • lots of inconsistency, incompleteness and superstition
  • user errors:
    • slip: accidental mistake like a misclick (solve by improving ergonomics and visual design)
    • mistake: the user does what they intend to do, but their mental model is wrong (solve with better feedback; make clear what the options are)

Heuristic evaluation of design

  • works with UI sketches or working UI
  • peer critique:
    • before user testing
    • before redesign
    • before release to polish rough edges
  • key steps:
    • give out a set of usability principles (heuristics) to people (peer designers, stakeholders)
    • let them examine the UI considering the heuristics (no collaboration allowed)
    • aggregate the findings (they can communicate afterwards)
  • evaluators process:
    • pre-eval training: (you) give them domain knowledge, info on scenarios
    • step through the design several times, at least twice (examine details, flow, architecture, consult the usability principles)
    • prioritize problems/violations found, be as specific as possible
    • the severity measure combines frequency, impact, persistence:
      • 0 not a usability problem
      • 1 cosmetic problem
      • 2 minor usability problem
      • 3 major usability problem; important to fix
      • 4  usability catastrophe; imperative to fix
      • example severity rating
        [Issue: Unable to edit one's weight in a sport tracking mobile app.
        Severity: 2, Heuristics violated: User control and freedom,
        Description: When you open the application and enter a weight, there is no option to change it afterwards.]
    • (you) aggregate the violations of the heuristics and discuss the items and possible fixes with participants and your design team
  • use 3-5 evaluators
  • faster than user testing, with more readily actionable results - but sometimes less precise (experts may flag something as a problem even though users would not)
  • standard set of heuristics to consider:

1) Show system status (system state, user workflow/timeline state)
    • time/process progress
      • progress bar when action takes >>1s or multiple steps
      • ~1s  show that activity is under way
      • <1s: just show the outcome (a small sketch of these thresholds follows this list)
    • space (eg gmail shows the % of quota used)
    • change (eg "you changed a document, do you want to save it?")
    • action (e.g. traffic lights; redundancy is good - red and at the top = stop, green and at the bottom = go)
    • next steps (let the user know what is going to happen next)
    • completion (notify when "done")
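
The timing thresholds above can be captured directly in an app's feedback logic. Below is a minimal Python sketch; the exact cutoffs and the function name are illustrative assumptions, not something prescribed by the lecture.

    # Illustrative sketch of the timing guideline above; the cutoff values
    # and names are assumptions for demonstration, not from the lecture.
    def feedback_for(estimated_seconds: float, multi_step: bool = False) -> str:
        """Pick a feedback style based on how long an action is expected to take."""
        if multi_step or estimated_seconds > 5:   # clearly longer than ~1s, or multiple steps
            return "progress bar"                 # show determinate progress
        if estimated_seconds >= 1:                # around a second
            return "activity indicator"           # show that activity is under way
        return "show outcome"                     # fast enough to just show the result

    print(feedback_for(0.2))   # -> show outcome
    print(feedback_for(1.5))   # -> activity indicator
    print(feedback_for(30))    # -> progress bar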

2) Familiar metaphors and language
    • metaphors (e.g. real-world metaphors in GUIs)
    • familiar language (for the target users)
    • familiar categories
    • familiar choices

3) User control and freedom
    • exits from mistaken choices, undo/redo, not forcing people onto fixed paths
    • go back in a process

4) Consistency and standards
    • same buttons in the same locations (eg on popups)
    • consistent names

5) Error prevention
    • prevent data loss (previews, confirmations; don't label the button that continues an action "no" or "cancel" in popups)
    • prevent clutter
    • prevent bad input (date field with widget, not free-type, helpful form input errors)

6) Recognition over recall
    • create interfaces that make objects, actions, options and directions visible or easily retrievable
    • any time the user has to resort to a post-it note, the relevant information is probably not available to them while they are working with the UI
    • lead with reasonable defaults for filters, don't show empty lists/tables
    • show previews

7) Flexibility and efficiency
    • provide keyboard shortcuts for experts
    • push ambient info (e.g. weather in a calendar app), but keep it relevant
    • recommendations (eg similar products)

8) Aesthetic and minimalist design
    • core info above the fold (no scrolling)
    • signal-to-noise (every bit of the UI has to mean something)
    • provide all the info in a clear and uncluttered way (e.g. a combined login + sign-up form with a radio button for 'new customer' that reveals the password + other sign-up inputs)

9) Recognize, Diagnose and Recover from errors
    • make the problem clear, display helpful error message
    • offer problem solutions/recovery
    • propose an alternative (e.g. offer a relaxation of constraints when the search does not return any results)

10) Help
    • help and documentation quality
    • provide examples (e.g. for possible searches)
    • popups/tooltips/callouts/highlights for new functionality

Representation matters

  • the representation of the problem should enforce/visually embed the constraints of the problem (offloading working memory, which holds only a handful of items - classically 7±2)
  • naturalness principle: properties of the representation match the properties of the represented thing
  • never forget: integrate the necessary step with the step that is easy to forget
  • world in miniature (e.g. print dialog with miniature page preview, miniature seat containing controls for positioning a real car seat)
  • show what is necessary for the particular task and nothing more
  • representation should enable operations users want to do
    • "fitness to task" (comparison, exploration, problem solving)
  • distributed cognition (offloading the user memory/brain cycles) can:
    • encourage experimentation (visual experimentation is better than just the user imagining stuff in his head)
    • scaffold learning and reduce errors through redundancy
    • show only differences that matter (e.g. London subway map is a scheme of routes, but is not geographically correct)
    • convert slow calculation into fast perception (visualize if we can use the visual recognition skills to process the information better/automatically, e.g. when deciding text vs visual representation in map coloring/labeling/annotation)
    • support chunking (group individual bits of an interface into a coherent chunk that takes up less cognitive effort to work with/remember)
    • increase efficiency (e.g. diagrams)
    • facilitate collaboration (e.g. pilots can set movable markers ("bugs") on the airspeed indicator, and these marks trigger actions at given speeds for the whole cockpit crew)
  • informational equivalence (the same information can be deduced from both representations) is not the same as computational equivalence (deducing that information takes the same amount of effort)

Visual design

  • whitespace - chunking the design
  • size contrast, typographic variation
  • 3 key goals:
    • guide (convey structure, relative importance, relationships)
    • pace (draw people in, help orient, provide hooks to dive deep)
    • message (express meaning and style, breathe life into the content)
  • 3 basic tools:
    • typography (e.g. hierarchy and content structure)
    • layout (e.g. navigation on the left/top)
    • color (e.g. highlights for attention drawing)
  • typography aspects:
    • point size (height of the lead block the letter was set in, not the actual letter height)
    • leading (the strips of lead between lines; usually leading ≈ 0.2 × point size, e.g. ~2 pt of lead for a 10 pt face, giving 12 pt line spacing)
    • x-height (height of lowercase letters, typeface with high x-height is easier to read at smaller point sizes or display resolutions, typeface with low x-height are more elegant)
    • weight (light, regular, bold)
    • serifs (unconfirmed: longer text reads easier with serif typefaces)
  • what typeface?
    • longer reading - familiar typeface
    • logo - more exotic that stands out

Grids, Grouping, Alignment

  • grids are favored by newspapers
  • when creating templates, design for the longest text block - the actual text is often longer than expected, and translations to German run very long
  • left-aligned text is faster to skim
  • tips:
    • avoid slight misalignment
    • when you deviate from a pattern, do so strategically
    • use visual proximity and scale to convey semantic information

Color

  • design in the grayscale first
  • start by using scale/layout, then add luminance (gray value) variation
  • in the end, add some color for redundancy

Information Scent

  • can people find the information they want?
  • poor scent: 
    • flailing/mousing around without clicking
    • low confidence
    • lots of use of back button
  • error examples: surprising categories, short links, hidden navigation, icons provide no additional information
  • icons help when:
    • they facilitate repeat recognition (recognized at first sight, or easily remembered and retrieved on second use)
    • the user knows how something looks but not what it is called
    • they provide redundant coding (users recognize either the picture or the word)
  • improve scent:
    • use longer specific link titles (+ explanations) with trigger words (e.g. things people are looking for) in user language
    • speaking block navigation (navigational element is composed of a 1-2 word catch-phrase + a sub-heading which explains the element)
  • design for glanceability; screen priority - the grid below shows screen regions by priority (1 = highest), reading the screen from the top-left:

    1 1 2 3
    1 1 2 2
    2 2 2 3
    3 3 3 3
  • prime page real estate: above the fold, where other pages put similar content, places where there are usually no ads
  • secondary page real estate: only if the content in the primary areas is good, people are happy to scroll
  • interlaced browsing (users browse multiple sites at once, along with email and other activities), so pages should support "context switching"
    • make the text more concise
    • make the text more scannable by making the paragraphs shorter and using bullet lists and subheadings
    • less marketing language

Designing studies

  • please-the-experimenter bias: don't ask "Do you like my interface?" - the answer will be "Of course."
  • What's the comparison? What is the yardstick? (what do we compare against)
  • good techniques:
    • base-rates: How often does X occur?
    • correlations: Is there a relation between X and Y? (e.g. order of the search results and CTR)
    • causes: Does X cause Y?
  • terms:
    • manipulations: independent variables, i.e. what the experimenter controls
    • measures: dependent variables, i.e. the observed outcomes (e.g. task completion time, accuracy, recall, how the person feels about the completed task)
    • precision: internal validity (e.g. if you run the experiment again, will you get the same thing?)
    • generalizability: external validity (e.g. does this apply just to my tested group or to the target user group I have)
  • strategies of comparison between old and tested and new approach/system:
    • insert your new approach into the production setting (proxy server, client side scripting, some users are routed to the old service some to the new service and then the measures are compared)
    • recreate the production approach in your new setting (e.g. if you just have a prototype of the new thing, re-create a prototype of the old system on the same level of fidelity to compare apples to apples)
    • scale things down so you are just looking at a piece of larger system (this isolation allows you to approach the implementation fidelity of the old system with your new system)
    • when expertise is relevant to target users, train study participants up
  • example:
    • vacuum cleaner
      • manipulation = vacuum cleaner type (old/new)
      • measures = speed, cleanliness

Assigning participants to conditions

  • between subjects design:
    • assign half of the participants to one manipulation class and the other half to the second
    • what if the predispositions of people influence the outcome?
  • within subjects design:
    • everyone uses both manipulation classes
    • what if the ordering influences the outcome?
  • counterbalancing:
    • like within subjects, but each half has a different order in manipulations
    • minimize learning by having a different task for each manipulation (e.g. with vacuum cleaner, have the participants clean different types of buildings)
  • individual participant differences?
  • with 3 manipulation classes (or more):
    • Latin square ordering [participants x manipulations]
      (each participant gets a different order, and conditions are balanced across rows and columns - see the sketch after this list)

123
231
312
  • all assignments should be random! (e.g. with website A/B testing, make sure visitors are randomly assigned too)
  • pre-tests and counterbalancing:
    • design a test to measure some relevant attribute of study participants (e.g. typing speed)
    • in assignment, make sure the attribute value is evenly distributed (random assignment alone is not enough, because with small sample sizes the average typing speed can vary a lot between groups; e.g. sort people by typing speed, take consecutive pairs, and randomly assign one person from each pair to each manipulation class - see the sketch after this list)
    • online counterbalancing based on a threshold (make sure the above & below classes have the same number of users for both manipulations)
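
A small Python sketch of the two counterbalancing ideas above: cyclic Latin-square ordering (matching the 1-2-3 table above) and pair-matched random assignment after a pre-test. The function names and the sample data are assumptions for illustration.

    import random

    # Sketch only: names and sample data are illustrative assumptions.
    def latin_square_orders(k):
        """Cyclic Latin square: row i is the condition order for participant i (mod k).
        Every condition appears once per row and once per column, as in the table above."""
        return [[(i + j) % k + 1 for j in range(k)] for i in range(k)]

    def pair_matched_assignment(pretest_scores):
        """Sort participants by a pre-test score (e.g. typing speed), take consecutive
        pairs, and randomly assign one member of each pair to condition "A" and the
        other to "B", so the attribute ends up evenly distributed across conditions."""
        ranked = sorted(pretest_scores, key=pretest_scores.get)
        assignment = {}
        for i in range(0, len(ranked) - 1, 2):
            first, second = ranked[i], ranked[i + 1]
            if random.random() < 0.5:
                first, second = second, first
            assignment[first] = "A"
            assignment[second] = "B"
        if len(ranked) % 2:                     # odd leftover participant: coin flip
            assignment[ranked[-1]] = random.choice(["A", "B"])
        return assignment

    print(latin_square_orders(3))   # [[1, 2, 3], [2, 3, 1], [3, 1, 2]]
    print(pair_matched_assignment({"p1": 42.0, "p2": 55.3, "p3": 38.1, "p4": 61.0}))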

In-Person studies

  • advantages: see points of confusion, discuss
  • think aloud (prompt people to keep talking)
  • decide ahead what things you will help with
  • process data (observations of what users are doing/thinking/saying)
  • bottom-line data (the measures..)
    • don't use elapsed time measure while "think aloud"
  • debriefing (discuss afterwards, explain what you were trying to find out etc)

Web experiments

  • ramp-up (start experiment at 0.1%, go to 50:50)
  • measure what matters
  • run it for long enough to bridge the unfamiliarity gap
  • random assignment:
    • consistent (a given person sees only one version all the time - see the hashing sketch after this list)
    • independent (run both A and B at the same time)
  • use multiple methods together, helps to generalize
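
One common way to get consistent assignment is to hash a stable user id into a bucket, so the same person always lands in the same variant and ramping up only moves users one way. A minimal Python sketch, with made-up experiment and user names:

    import hashlib

    # Sketch only: experiment/user names and the bucketing scheme are assumptions.
    def assign_variant(user_id, experiment, treatment_pct):
        """Deterministically map a user to a 0-99.99 bucket; users below the
        threshold see variant B. treatment_pct controls the ramp-up (0.1 -> 50.0)."""
        digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 10000 / 100.0
        return "B" if bucket < treatment_pct else "A"

    # The same user gets the same variant on every request; raising
    # treatment_pct only moves users from A to B, never back and forth.
    print(assign_variant("user-42", "new-checkout", 0.1))
    print(assign_variant("user-42", "new-checkout", 50.0))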

Comparing rates

  • what does my data look like? (graph, plot)
  • overall numbers? (mean, std dev)
  • are the differences real? (significance)
  • significance (chi-squared): chi^2 = SUM over all observed categories of (observed - expected)^2 / expected
    • compare the resulting statistic against a chi-squared value table (a worked sketch follows this list)
  • null hypothesis: the manipulation has no effect (e.g. the opening bid is neutral and we expect the average outcome); in the experiment we try to falsify the null hypothesis
  • continuous data significance tests: T-test, ANOVA
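
A worked Python sketch of the chi-squared test above, for a made-up 2x2 A/B conversion table; 3.84 is the standard critical value for p < 0.05 with one degree of freedom.

    # Sketch only: the counts below are made up for illustration.
    def chi_squared(observed):
        """Pearson chi-squared: sum over cells of (observed - expected)^2 / expected,
        where expected counts come from the row/column totals under the null hypothesis."""
        row_totals = [sum(row) for row in observed]
        col_totals = [sum(col) for col in zip(*observed)]
        total = sum(row_totals)
        stat = 0.0
        for i, row in enumerate(observed):
            for j, o in enumerate(row):
                expected = row_totals[i] * col_totals[j] / total
                stat += (o - expected) ** 2 / expected
        return stat

    # Made-up counts: [converted, did not convert] for each variant.
    table = [[120, 880],   # variant A
             [150, 850]]   # variant B
    stat = chi_squared(table)
    print(stat, "significant at p < 0.05" if stat > 3.84 else "not significant")
    # For continuous measures, scipy.stats.ttest_ind / f_oneway cover the t-test and ANOVA.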
