Interacting with computers requires us to translate our thoughts into concrete actions. Our thought is “I want to remember an appointment,” but our actions may be “open Calendar, find the date, drag the mouse over the time you want, enter the title and other fields, click save.” We’re translating our thoughts into clicks and keystrokes.
Much of user experience research boils down to “how can we make this translation process as simple as possible?”
Graphical User Interfaces are carefully tuned to reduce this translation overhead by making use of common patterns. Natural Language Interfaces (like Siri) seek to reduce this overhead even further by understanding the language that humans already use to communicate. In some cases, this works great and provides an effortless alternative to clicks or taps.
However, we’ve all had that awkward experience of trying to give Siri a long command and blanking halfway through, seemingly forgetting the syntax of the language we use all day long. Why does this happen?
It turns out that communicating with Siri still involves a translation process, one that is at least as complex as the graphical alternative. Our mind doesn’t process thoughts in perfectly structured sentences. Concepts and ideas float around with loosely defined relationships. Some thoughts we picture as images rather than words. Some people lay thoughts out in an imaginary multi-dimensional space. Our brains need to work hard to map this chaos into a string of words. Doing this for an impatient robot sometimes proves difficult.
The Solution?
So does Lacona solve this problem? Maybe a bit.
Unlike traditional GUIs, Lacona allows for dramatically increased fuzziness. You don’t need to recall a contact’s last name just to find them in an alphabetized list. You can type part of their first name, their last name, their initials, or their relationship to you, and Lacona will do its best to figure it out. The text-based approach requires less hierarchy and precision, but allows access to dramatically more data.
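To make that concrete, here’s a minimal sketch of what this kind of multi-field fuzzy matching might look like. The Contact fields and the matching rule are my own illustration, not Lacona’s actual implementation:

```typescript
// Toy sketch of fuzzy contact matching (not Lacona's real algorithm).
// A query matches a contact if it appears, case-insensitively, in any
// searchable key: first name, last name, initials, or relationship.

interface Contact {
  firstName: string;
  lastName: string;
  relationship?: string; // e.g. "sister", "dentist"
}

function searchKeys(c: Contact): string[] {
  const initials = c.firstName[0] + c.lastName[0];
  return [c.firstName, c.lastName, initials, c.relationship ?? ""];
}

function matchContacts(query: string, contacts: Contact[]): Contact[] {
  const q = query.toLowerCase();
  return contacts.filter((c) =>
    searchKeys(c).some((key) => key.toLowerCase().includes(q))
  );
}

// "rob", "stark", "rs", and "sister" would all surface the same contact.
const contacts: Contact[] = [
  { firstName: "Robin", lastName: "Stark", relationship: "sister" },
];
console.log(matchContacts("rs", contacts)); // -> [Robin Stark]
```

The point is simply that any fragment the user happens to remember becomes a valid entry point, instead of forcing them to navigate one rigid sort order.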
Unlike voice interfaces, Lacona is patient. I don’t need to know exactly what I want to do before I start typing. I can pause and think—or even click away—before continuing my query.
Perhaps even more importantly, Lacona is order-agnostic; I can enter data in the order it comes to mind. Perhaps my spouse reminds me of a doctor visit that may not yet be on my calendar. I type “Doctor” into Lacona. When it returns no upcoming Calendar results, I know it isn’t scheduled yet. From there, I can press return to add “Doctor” to my shelf. Perhaps I then add the date to the shelf as well, before it slips my mind. Only then do I decide to use the “create event” command and select the appropriate calendar.
Natural language alone does not allow for this kind of construction. No voice assistant could handle the string “doctor next tuesday create event personal,” but Lacona’s innovative shelf makes it natural. Perhaps next time I perform this task, I go about it in a different order. Minds are fuzzy like that.
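As a rough sketch of the idea, order-agnostic input can be modeled as slot filling: fragments accumulate on a shelf in any order, and each one is assigned to whichever slot of the command recognizes it. The slot names and matching rules below are invented for illustration, not Lacona’s real API:

```typescript
// Hypothetical sketch of order-agnostic slot filling with a "shelf".
// Fragments arrive in any order; when a command runs, each fragment is
// assigned to a slot that accepts it. Illustration only, not Lacona's API.

type Slot = { name: string; accepts: (fragment: string) => boolean };

// Slots listed from most general to most specific.
const createEvent: Slot[] = [
  { name: "title", accepts: (f) => /^[a-z ]+$/i.test(f) },
  { name: "date", accepts: (f) => /^(next )?\w+day$/i.test(f) },
  { name: "calendar", accepts: (f) => ["personal", "work"].includes(f) },
];

function fillSlots(shelf: string[], slots: Slot[]): Record<string, string> {
  const filled: Record<string, string> = {};
  for (const fragment of shelf) {
    // Prefer the most specific unfilled slot that accepts this fragment.
    const slot = [...slots]
      .reverse()
      .find((s) => !(s.name in filled) && s.accepts(fragment));
    if (slot) filled[slot.name] = fragment;
  }
  return filled;
}

// The fragments came to mind out of order, but the command still assembles:
console.log(fillSlots(["doctor", "next tuesday", "personal"], createEvent));
// -> { title: "doctor", date: "next tuesday", calendar: "personal" }
```

Shuffle the shelf array and the result is the same, which is exactly the property a rigid grammar (or a form with fields in a fixed order) cannot offer.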
This isn’t anywhere close to a direct brain-to-computer interface, but surely it’s a step in the right direction.