The way we interact with computers has gradually evolved over the years. Early computer systems used punched cards and printers. Then came terminal-based command line interfaces, followed by simpler menu-based systems still driven by the command line. The office desktop metaphor arrived with windows, icons and folders. The first websites were primitive, far simpler and more limited than the dominant Windows apps of the time. But the web had universal access and grew more popular. The second wave of web interfaces, Web 2.0, brought more advanced user interface techniques. The first mobile user interfaces were simple menus, usually poorly organised. Then came the smartphone app, which now dominates alongside cloud computing. But what comes after the app? One trend hints at where we may be heading next: conversational user interfaces. Apps of the future might not need to be visual. You just talk to them.
The applications leading this development are intelligent voice assistants. Apple has Siri on its devices, Google has OK Google on Android, and Microsoft has Cortana on Windows. Amazon has developed the Amazon Echo, a small device that can be placed in your home, and Google has Home. We are already seeing this trend unfold.
Apple’s Siri has been around for a while, and these systems are steadily getting better. When Siri first came out, the expected thing happened: it worked remarkably well for simple commands. However, the task facing Siri is enormous. You can ask her anything, as the scope of subjects is unlimited. And that is exactly what people do. YouTube has many videos of people having fun with Siri; searching “Siri Funny” returns over 500,000 results (as of 11 September 2016).
Two adjacent trends are worth mentioning. First, the processing power of small devices is now advanced enough to process live speech in real time. The iPhone 7, released in September 2016, has the A10 Fusion processor. The A10 is a quad-core processor with a clock speed of 2.34 GHz, specifications that would fit any laptop nicely.
Second, the AI behind language understanding is improving at a dramatic rate. In only a few years there has been a leap forward in machine learning. With faster and bigger clusters of computers, more data, and better neural network algorithms, AI applications are becoming far more capable. One key observation is that these systems exhibit a kind of network effect: the more people use the speech recognition software, the better it gets at understanding language. What is more, these systems can learn regional dialects and slang.
With these technological advances, imagine a new world of computing where you simply have a conversation with apps. Be it a travel app that helps you organise a trip, a street navigator that guides you (the device knows where you are), a legal app offering legal counsel, a psychiatrist app that listens to your darkest thoughts, a doctor app that hears out your awkward and embarrassing problems, a sales representative app that explains a product, and so the list goes on. Then add talking with things in your environment: imagine conversing with cars, elevators, coffee machines, automated grocery checkouts, hotel check-in desks and so on.
But are we really ready to talk to our devices? Like any new technology, talking with devices will follow the law of diffusion of innovations. In a February 2016 user adoption survey by MindMeld (likely US based), some 62% of smartphone users reported having used a voice assistant. That reaches into the late majority of technology adopters.
Technology moves in strange ways. We learnt how to use a mouse and keyboard and got used to them. Then we took to tapping our touch-screen phones. And now we can simply talk to these devices. Surely this changes the form factor. What will the phone of the future look like?