First and second steps
At first there were desktop computers:
- either at home or at the office (later also on the go with notebooks),
- desktop apps present complex information in 2D (or projected 3D) graphical format on a large screen, with sound, and
- users can input data and commands with a mouse and keyboard, which they initially need to learn to use.
Then we had the smartphone revolution:
- everywhere, even on the road,
- as powerful computing components got small enough,
- mobile apps can show a fair amount of information, also in 2D, on a small screen (later also on larger “tablets”), with sound, and
- users can easily input data and commands with natural multi-touch gestures (and lately, also voice),
- while they can continue using their desktop computers whenever mobile devices do not support certain features or scenarios (such as for development).
One can think about the roadmap of Amazon Alexa and similar voice-enabled devices:
- either at home or at the office (later probably also in vehicles),
- with help from artificial intelligence development,
- voice apps can provide information in sound format using human language even without a screen (later probably in more languages, and with optional touch screen accompaniment), and
- users can naturally input a fair number of commands with their voice (and optionally some touch gestures),
- while they can continue using their desktop computers and smartphones whenever voice-enabled devices do not support certain features or scenarios or when they are not around,
- but unless people actually want to get away from smartphones themselves (as the roadmap article linked above suggests), which is hardly possible when they are on the road without a vehicle (I guess), there is little real need for these devices, since smartphones can easily turn into voice-enabled devices anyway.
Finally, we have Microsoft HoloLens and similar devices of the future:
- everywhere, even on the road,
- as powerful computing components get even smaller, and with help from artificial intelligence development,
- holographic apps would present very complex information in 2D and natural 3D augmented-reality formats, projected directly onto the user’s eyes, with sound, and
- users could input a fair number of commands naturally with air gestures and their voice, and enter textual data and further commands with a Bluetooth-connected keyboard,
- so they could eventually get rid of their smartphones and later even their desktop computers,
- but the device needs to become more powerful yet lighter and comfortable enough for end users, foldable to fit into pockets, and gain much better battery life (technically difficult); end users also need time to get used to wearing such glasses instead of touching phones (which I guess would be very difficult too); and air gestures may simply prove too tiring to perform all the time, compared to using multi-touch screens.
As you can see, the final points of both possible directions above are fairly significant hurdles, so I don’t think a truly revolutionary wave 3 is going to happen soon. Instead, I tend to consider these devices interesting accessories, perhaps even opening new niche markets that developers can embrace (just like smartwatches), but nothing more for now.