The good news, for me at least, is that the computer thinks I have a nice personality. According to an app called MorphCast, I was, in a recent meeting with my boss, generally “amused,” “determined,” and “interested,” though—sue me—occasionally “impatient.” MorphCast, you see, purports to glean insights into the depths and vagaries of human emotion using AI. It found that my affect was “positive” and “active,” as opposed to negative and/or passive. My attention was reasonably high. Also, the AI informed me that I wear glasses—revelatory!
The bad news is that software now purports to glean insights into the depths and vagaries of human emotion using AI, and it is coming to watch you. If it isn’t already: MorphCast, for example, has licensed its technology to a mental-health app, a program that monitors schoolchildren’s attention, and McDonald’s, which launched a promotional campaign in Portugal that scanned app users’ faces and offered them personalized coupons based on their (supposed) mood. It is one of many, many such companies doing similar work—the industry term is emotion AI or sometimes affective computing.
Some products analyze video of meetings or job interviews or focus groups; others listen to audio for pitch, tone, and word choice; still others can scan chat transcripts or emails and spit out a report about worker sentiment. Sometimes, the emotion AI is baked in as a feature in multiuse software, or sold as part of an expensive analytics package marketed to businesses. But it’s also available as a stand-alone product, and the barrier to entry is shin-high: I used MorphCast at no cost, taking advantage of a free trial, and with no special software. At no point was I compelled to ask my interlocutors if they consented to being analyzed in this way (though I did ask, because of my good personality).
this post was submitted on 04 May 2026
Technology
I can't think of any beneficial uses for such a product.
The only ones I can think of are research or clinical settings. It could be genuinely useful for certain kinds of social or psychological research, or for monitoring patient status. That's not how it will be used for the most part, but it is one place it could do some good. Like any tool, it can be used for good things and bad (how revelatory...)
The problem with the AI industry (and modern, unregulated capitalism in general) is that as soon as someone has a potentially useful tool, they chase every possible use for it with no regard for the societal consequences. Thinking about the ramifications of a tool doesn't increase shareholder value. In fact, trying to ensure your tool is used only in positive ways actively harms shareholder value. Greed perverts all that is good.
The only application I can see for such research would be to extend and refine the dystopian use cases. What else would such research be used for? It will only feed back into the cycle of privacy invasion and the surveillance state.
Impersonal patient-status monitoring (beyond vital signs like heart rate, which we can already track far more easily) will not have any practical benefit. The most likely outcome is that it will be used to justify reduced nurse staffing.