A handedness prediction program created during my time as an Ethics Technical Leader at Allegheny College, a position funded by a grant from the Mozilla Foundation. The program was part of a course assignment I led that demonstrated the biases produced by more advanced algorithms that collect our data online.
The QAZPLM Program demonstrates hidden biases in the inferences made by larger-scale algorithms. It does this, indirectly, by asking users to type as many random letters as they can during four 15-second trials. The program then summarizes the results from each trial, determining what portion of a user's keystrokes fell on the left, middle, or right side of the keyboard. From this, the program infers the user's handedness. The user's deliberate choices, or an unnoticed hand bias, will determine the program's handedness inference. While an incorrect handedness prediction may not seem major, when other algorithms make incorrect assumptions and predictions about a user for purposes such as targeted advertising, the implications can become more serious.
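The trial summary and inference steps described above could be sketched roughly as follows. This is a minimal illustration, not the program's actual code: the key groupings, function names, and the simple "most keystrokes wins" rule are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical groupings of letter keys by keyboard side;
# the real QAZPLM program's mapping may differ.
LEFT_KEYS = set("qwertasdfgzxcv")
MIDDLE_KEYS = set("yhb")
RIGHT_KEYS = set("uiopjklnm")


def summarize_trial(keystrokes: str) -> dict:
    """Return the fraction of a trial's keystrokes on each side of the keyboard."""
    counts = Counter()
    for ch in keystrokes.lower():
        if ch in LEFT_KEYS:
            counts["left"] += 1
        elif ch in MIDDLE_KEYS:
            counts["middle"] += 1
        elif ch in RIGHT_KEYS:
            counts["right"] += 1
    total = sum(counts.values()) or 1  # avoid division by zero on an empty trial
    return {side: counts[side] / total for side in ("left", "middle", "right")}


def infer_handedness(trials: list) -> str:
    """Naively infer handedness from whichever side received more keystrokes."""
    left = sum(t["left"] for t in trials)
    right = sum(t["right"] for t in trials)
    if left > right:
        return "left-handed"
    if right > left:
        return "right-handed"
    return "ambidextrous"


# Example: summarize four (made-up) trials, then make the inference.
trials = [summarize_trial(s) for s in ["qazwsx", "plmokn", "qwerty", "asdfjk"]]
print(infer_handedness(trials))
```

The point of the activity is precisely that an inference like this can be confidently wrong: a right-handed user who happens to favor the left side of the keyboard will be misclassified, just as larger algorithms misclassify people from incidental patterns in their data.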
Once the program was completed, I ran a class activity for a Data Analytics course at Allegheny College to illustrate the ethical implications of using people's data, especially when that data is used to make assumptions about them. This work was completed in my role as an Ethics Technical Leader at Allegheny College, a position funded by a grant from the Mozilla Foundation.