E.M.M.A. (Short Film)
As a sci-fi fanatic and lover of short films, I immediately clicked over to view the film after reading the description. This post contains spoilers, however, so if you might be interested in watching a film about robot testing, you can do so below. It’s 14 minutes and 20 seconds long.
I went into this blind, which I rarely do: I only read the description and didn’t look it up on the Web until after I watched it, so I didn’t know what to expect or what it was about.
The film opens with E.M.M.A., pronounced “Emma”, staring out the window; when the scientist enters, she walks, robotically, over to her desk to prepare for testing. E.M.M.A. wants to ride the horses, but she isn’t allowed. Though she’s a robot, her wants made me hurt for her: she desires freedom and the ability to do what she wants, but she can’t. There isn’t much reaction, if any, in her face, and I like that. Too often, I read about how creative consultants “need” to see facial reactions to determine one’s emotions and reactions, but I feel they invest too much in that. Communication is more than what is “between the lines”1, and physical reactions don’t always match up with what the communication class books teach.
She’s a robot, yes, but it’s also so relevant, because Aspies are often called robots or aliens, so it’s not too far-fetched.
Moving on: the assistant, at first, comes off as poorly cast; the emotion feels fake, or too firm.
It’s slow and boring, and I was prepared to call it the worst short film I’ve ever seen, but then it reached the end. It didn’t answer all the questions I had, or everything I would want answered, but it ended, and it made sense. The description had me hoping for something else, and that is where the disappointment stems from.
Whereas the second generation of robots is programmed not to want to know its purpose or to remember anything from the previous day, the third generation is too emotional… and it was the third gen that was really being tested.
I suppose, though, it raises the question of whether we will coexist with androids in the future, provided they don’t pose threats, and whether they would be allotted the same freedoms. I’m aware that, with programming, they are not “supposed” to feel or want things if they are specifically programmed not to, but… what if that kind of behavior seeps through the cracks anyway?
On an off-topic, yet related, note: it’s too bad Almost Human was cancelled. 🙁
1. Whatever the hell kind of purpose between-the-lines statements even serve. ↩