cui_spring2021

overview

Accessibility presentations

How the blind use technology to see the world

Austin Seraphin - video

Austin’s website.

Voice User Interfaces - Austin served on the accessibility committee for these interactive fiction games and has posted a report on their research.

Facial Recognition

facial recognition data points tracking

How does it work? Five step process:

  1. Facial detection / tracking
  2. Facial alignment
  3. Feature extraction
  4. Feature matching
  5. Facial recognition
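
Steps 4 and 5 above can be sketched in code. A minimal, hypothetical example in JavaScript: it assumes the earlier steps have already turned each face into a numeric feature vector (an "embedding"), and matches faces by cosine similarity between those vectors, one common approach.

```javascript
// Step 4 sketch (feature matching): compare two face embeddings
// (numeric vectors produced by a feature extractor) by cosine similarity.
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    magA += a[i] * a[i];
    magB += b[i] * b[i];
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

// Step 5 sketch (recognition): find the best match for a probe embedding
// in a gallery of known faces, or report "unknown" below a threshold.
function bestMatch(probe, gallery, threshold = 0.8) {
  let best = { name: null, score: -1 };
  for (const [name, embedding] of Object.entries(gallery)) {
    const score = cosineSimilarity(probe, embedding);
    if (score > best.score) best = { name, score };
  }
  return best.score >= threshold ? best.name : "unknown";
}
```

For example, `bestMatch([0.9, 0.1], { alice: [1, 0], bob: [0, 1] })` identifies the probe as "alice", while a probe far from every gallery face comes back as "unknown".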

Facial recognition in applications
Facial recognition in application interfaces

New facial recognition and machine learning interfaces

Project Euphonia - Helping everyone be understood - video - link to project

Facial Recognition - ethical concerns

What Facial Recognition Steals From Us - video

Human faces evolved to be highly distinctive; it’s helpful to be able to recognize individual members of one’s social group and quickly identify strangers, and that hasn’t changed for hundreds of thousands of years. Then in just the past five years, the meaning of the human face has quietly but seismically shifted. That’s because researchers at Facebook, Google, and other institutions have nearly perfected techniques for automated facial recognition. The result of that research is that your face isn’t just a unique part of your body anymore, it’s biometric data that can be copied an infinite number of times and stored forever. In this video, we explain how facial recognition technology works, where it came from, and what’s at stake. –ReCode, Vox Media

Reading on ethical issues of facial recognition

facial recognition in store
image modified by ACLU, original by colorblindPicaso on Flickr

Examples of Facial Recognition UI

Coding examples with Machine Learning and Teachable Machine

Teachable Machine

Training an Image Classification Model

  1. Collect Data
    • Have 2 kinds of images of something
    • Label those images - (these are called “classes”)
    • How many? Experiment; roughly 25-50 images per category is a good start.
  2. Click Train Model and do not switch tabs: training runs live in the browser!
    • This image classification is using MobileNet and a pretrained model (of a Convolutional Neural Network) to do Transfer Learning
  3. When finished, test it. You can add more classes if you like (such as additional poses, or additional images).
  4. Happy with it? Save it by clicking Export model. Choose TensorFlow.js and export to the cloud. (If you don’t want to upload your model to Google’s servers, you can instead save locally.) You will get a URL and a permanent webpage to use/test/debug/change your model.
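
Once exported, the TensorFlow.js model’s predict() call returns one entry per class, each with a class name and a probability. A minimal sketch of picking the winning class from that output (the prediction shape `{ className, probability }` matches Teachable Machine’s image export; the browser-side loading code is shown in comments, with a placeholder URL):

```javascript
// Given predictions in Teachable Machine's TensorFlow.js output shape
// (an array of { className, probability }), return the most likely class.
function topClass(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best);
}

// In the browser (sketch; URL is the address of YOUR exported model):
// const model = await tmImage.load(URL + "model.json", URL + "metadata.json");
// const predictions = await model.predict(webcamCanvas);
// console.log(topClass(predictions).className);
```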

Example starter code

link

  1. Change your model URL!
  2. Edit your triggers.js file!
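
A hypothetical sketch of what a triggers.js edit might look like (the actual starter file may be organized differently; this just shows the pattern of mapping each model class name to an action):

```javascript
// Hypothetical triggers.js sketch: map each class name from your trained
// model to an action to run when that class wins the prediction.
const triggers = {
  "Class 1": () => "play",
  "Class 2": () => "pause",
};

// Run the trigger for the predicted class, if one is defined.
function runTrigger(className) {
  const action = triggers[className];
  return action ? action() : null;
}
```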

Resources for Teachable Machine

Final Project: Speculative Interfaces

requirements

“Where typical design takes a look at small issues, speculative design broadens the scope and tries to tackle the biggest issues in society.” –Anthony Dunne and Fiona Raby, Speculative Everything: Design, Fiction, and Social Dreaming

Rather than look just at issues of today, speculative design thinking asks “How can we address future challenges with design?”

Propose a speculative user interface for an application. Throughout the course of the project you will propose a concept idea and design brief, create prototypes, test, and document and present.

Your idea can be practical or fanciful, surprising or challenging. It is an experimental interface, pointing forward to a new future.

Keep in mind our design and prototyping processes we’ve covered throughout the semester. You may have to improvise your own new approach for your speculative interface.

Consider our readings and learning from throughout the semester including but not limited to: the early history of interface design, interface metaphors of the desktop, ergonomics, graphical interfaces, accessibility, voice control, speculative thinking.

For next week, turn in a Design Brief including:
