Affective Computing: The Ethics of It All

Ethical and moral dilemmas are nothing new to computer and electronic technology. With affective computing, however, they hit closer to home because the technology involves the most intimate part of one's existence. When the most intimate aspect of being human is involved, treading with caution is required. Being able to feel, express, and experience emotions (or choosing not to) is a fundamental element of living, and a private one at that. When those ultimately private experiences are exposed to the world, what are the implications and the ramifications? How do we ensure that the data used for affective technology is truly representative? And for those who use or manipulate those emotions, what are the consequences?

Hernandez et al. [3] explore the harm that current technology imposes on those who implement it and on its participants (who are sometimes unwilling or unknowing), while offering a possible set of guidelines for affective computing. These guidelines take into consideration technological limits and user involvement, and they offer practical ways to put the guidelines into action. Hernandez et al. [3] explain that the technology currently in use is more than likely being applied out of context, with users completely unaware of its limitations. Following the guidance of Hernandez et al. [3], designers and implementers of affective technology should clearly state the technology's limitations while providing users a way to opt out and giving users and subjects control over their data.

Some of those limitations may seem harmless, or not an ethical or moral concern, given that, as Cowie [1] explains, a fair number of applications are frivolous. Cowie [1] paints a more optimistic picture of affective computing's ethical and moral responsibility. Yet while most of the technology being generated can be considered frivolous to one degree or another, these technologies are not being used without consequence. Cowie's [1] argument fails to take into account real-world examples where seemingly frivolous technology was manipulated by nefarious actors. The prime example of affective technology being used to manipulate users is the Cambridge Analytica scandal [2]. This sort of data and user manipulation has had untold ramifications for the global community.

Though not directly affective computing, the recent debate over the IRS's implementation of facial recognition, later retracted, has opened the conversation about user data, the impact on user services, and what regulations should be put in place. I think we can all agree that regulation and guidance are needed, depending on the application, to ensure that users remain at the forefront and that this technology is not used for immoral or unethical purposes [1,3]. As technology continues to grow and advance at immeasurable rates, ensuring that the equity and accessibility gap does not continue to widen is vital to the healthy advancement of global communities.

References
[1] Roddy Cowie. 2012. The Good Our Field Can Hope to Do, the Harm It Should Avoid. IEEE Transactions on Affective Computing 3, 4 (2012), 410–423. DOI:https://doi.org/10.1109/t-affc.2012.40

[2] Federal Trade Commission. 2019. FTC Sues Cambridge Analytica, Settles with Former CEO and App Developer. Federal Trade Commission. Retrieved from https://www.ftc.gov/news-events/press-releases/2019/07/ftc-sues-cambridge-analytica-settles-former-ceo-app-developer

[3] Javier Hernandez, Josh Lovejoy, Daniel McDuff, Jina Suh, Tim O'Brien, Arathi Sethumadhavan, Gretchen Greene, Rosalind Picard, and Mary Czerwinski. 2021. Guidelines for Assessing and Minimizing Risks of Emotion Recognition Applications. Retrieved from https://www.microsoft.com/en-us/research/uploads/prod/2021/07/camera_ready_share.pdf