The Year of Virtual Reality

Los Angeles, CA, USA, August 29, 2015: A virtual reality headset at the VRLA Expo, a virtual reality exposition held at the Los Angeles Convention Center.

Given the hype around virtual reality of late, one might think that this immersive form of reality is an entirely new innovation. But the history of virtual reality (VR) goes back many years. The respected computer scientist Ivan Sutherland experimented with what he called the Ultimate Display as early as 1965. In the 1990s there was growing interest and investment in VR, but none of it went mainstream. Now, in 2016, we are finally seeing real progress in virtual reality. 2016 is the year of VR.

Here is a news report from Primetime Live in 1991:

It is funny to think back to 1991. The dominant PC was based on the Intel 386 or the newer 486. The most widely used operating system was MS-DOS, while Windows 3.0 was gaining popularity. The closest you could get to a virtual reality experience was to go out, rent a VHS tape and watch a movie.

It is not surprising that these early VR systems never took off. They were crude and slow, and the price tag ranged from 50 to 200 thousand dollars, hardly an attractive consumer price point. This is a classic example of the adjacent possible: the technology was simply not ready. Computers were not fast enough, displays were too crude and low resolution, and everything was still too expensive.

Why should we think it is different now? In a panel on virtual reality at Slush Play in Reykjavik in 2015, the main concern of the panellists was that the current VR wave would be yet another disappointment. So many times we had raised our hopes for VR, only to be disappointed.

What is different now is that the enabling technologies seem to be ready. The required components are mass-produced at low cost, thanks partly to the smartphone revolution. Even with low-end, smartphone-mounted headsets we are seeing promising experiences. And thanks partly to video games, the processing power required is here.

Most people think VR is for games. But two things make VR so much more than video games. First, VR goes straight to your “lizard” brain. I tried the Oculus Rift early in 2015, and in one scene I stood on top of a skyscraper in New York. Looking down, I saw traffic. I knew I was safe in a room, but my brain sensed danger and nothing I did could change that. VR messes with your brain. Second, VR is not limited to games: think movies, education and virtually any experience.

The main problem, though, is that a truly immersive VR experience requires huge computing capacity. According to Bloomberg Business, only about 1% of the PCs shipped in 2016 will be powerful enough to run VR. For the Oculus Rift, Facebook recommends a graphics card equivalent to the NVIDIA GTX 970 or AMD R9 290 or greater, and a CPU equivalent to the Intel Core i5-4590 or greater.

Despite this, the technology that VR needs in order to work definitely exists. While the cost is a fraction of the 1990s price tag, it is still expensive, which suggests that the rollout to consumers will be slow and might take years. But it could also be an opportunity for PC makers to sell VR-ready machines; in a smartphone-obsessed market, this might be a welcome boost for them. And if VR is as great as many people say, investing in the required hardware is a no-brainer.

Or could we even see the return of arcades, this time as VR arcades? I doubt it, but computing history shows that when a technology works but is too expensive, economic models emerge to provide access. The time-sharing systems of the 1970s and the video game arcades of the 1980s are examples.

The real danger, though, is a lack of good content. We still need to figure out what works in VR. This is a classic question for every new platform. When the iPhone App Store became available, it took some time to sort out what worked. The same was true of the PC, of Windows, and of every other major platform.

Given the number of VR startups that presented at Slush in Helsinki in September 2015, we do not have to worry about a lack of interested parties with ideas for a killer app. The potential of VR is enormous, and it goes well beyond video games. How it plays out remains to be seen, but 2016 is definitely the year VR gets a lot of attention.

New Era of Machine Learning


The winner of the TechCrunch Disrupt Startup Battlefield, held in London in December 2015, was a small startup from England called Jukedeck. They presented a service that helps you create music for videos. The creation part is handled by an algorithm that automatically generates the music according to your preferences; you can specify the style and mood of the music as well as its length. The real news here is that people now have access to machine learning algorithms that can generate music. Forget the video part: algorithms are now composing music. This is just one of many examples of the new generation of machine learning algorithms that have emerged in the last few years. We are in the early stages of a new era of machine learning.

Creating an algorithm to generate music is not new. Prof. David Cope at the University of California, Santa Cruz, experimented with computer-generated music at the end of the 20th century. He specialised in classical music, in particular Johann Sebastian Bach. By 1997 he had refined his algorithm to the point where it could generate compositions good enough to fool the general listener. Maybe not masterpieces, but good enough.

In 2012, Google announced their research into finding cats in videos. It may seem like a silly problem to solve, but it was actually a breakthrough in computer science. Ever since the introduction of the first computers, ironically called “electronic brains”, people have been fascinated by their capacity to act like a brain. However, computers are nothing like the brain. If you ask a computer to multiply 1,000 three-digit numbers, even an old 1960s machine will do far better than any human. It turns out that what is hard for humans is easy for computers, and what we find easy is hard for computers. Ask a two-year-old to point to the cat in a picture and it is an easy task. Give that problem to a computer and it is remarkably difficult to solve.

The fact that problems like image recognition and creative tasks such as composition are hard has not stopped people from working on machine learning, or Artificial Intelligence (AI) in general. In the 1980s there was a promising new wave of AI technology called neural networks: networks that in some ways mimic the brain, with nodes and connections between them. The idea was to train the network to get better at specific tasks. Although promising at the time, these networks did not deliver much and became a disappointment. Yet another AI winter followed, not uncommon in the quest for intelligent machines over the years.

Neural networks may not have worked in the 1980s, but since then we have seen exponential growth in computing power, storage and bandwidth. Now we have cloud computing and big data. We also have video games, and video games need Graphics Processing Units, or GPUs, which means we can build really powerful supercomputers relatively cheaply. This is the adjacent possible for a new era of AI. It turns out that the basic idea of neural networks was not wrong; the capacity to make them work simply was not available in the 1980s. A really good example of the adjacent possible.

Prof. Cope’s algorithm was a programmatic way to generate a specific type of musical composition. The new algorithms we see today, like the Jukedeck service, work in a totally different way. Machine learning algorithms are trained, not programmed. Part of this is deep learning: algorithms that are fed huge amounts of data and use layers of nodes to try different combinations, strengthening the ones that work and repeating. One class, recurrent neural networks, seems able to solve particular types of problems such as speech recognition, handwriting recognition and, surprisingly, music composition.
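To make the “trained, not programmed” idea concrete, here is a minimal sketch in Python (a toy example of my own, not Jukedeck’s or Cope’s code): a tiny neural network with one hidden layer that learns the XOR function purely from examples, nudging its weights in whatever direction reduces its error and repeating.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs (XOR)

    W1 = rng.normal(size=(2, 4))   # input-to-hidden weights, random to start
    W2 = rng.normal(size=(4, 1))   # hidden-to-output weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        h = sigmoid(X @ W1)          # forward pass: hidden layer
        out = sigmoid(h @ W2)        # forward pass: network output
        err = out - y                # how wrong is the network?
        grad_out = err * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out   # adjust weights to reduce the error
        W1 -= 0.5 * X.T @ grad_h

    print(np.round(out, 2))          # after training: close to 0, 1, 1, 0

Nobody wrote a rule describing XOR; the network found suitable weights by itself, from the examples. Deep learning systems work on the same principle, just with vastly more layers, data and computing power.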

Even with all the knowledge about machine learning that is available, building machine learning software is really hard and requires huge infrastructure. The technology is still very academic but is starting to produce practical solutions that will open up new possibilities. Big technology vendors are democratising machine learning by offering relatively easy and affordable access to machine learning software. Google has its Prediction API and Amazon has its Machine Learning service. Access to machine learning systems is now as simple as signing up for a subscription on the web.
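As a rough illustration of how little code such a hosted service requires, here is a sketch of requesting a real-time prediction from Amazon Machine Learning using the boto3 library; the model ID, endpoint and feature names are placeholders for a model you would first have trained through the AWS console.

    import boto3

    # Client for the Amazon Machine Learning service (boto3 must be configured
    # with AWS credentials; the region and identifiers below are placeholders).
    client = boto3.client("machinelearning", region_name="us-east-1")

    response = client.predict(
        MLModelId="ml-EXAMPLEMODELID",                  # hypothetical model id
        Record={"age": "34", "plan": "premium"},        # hypothetical input features
        PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
    )
    print(response["Prediction"])                       # predicted label or score

The heavy lifting, training on large datasets and serving the model, happens on the vendor’s infrastructure; the customer only sends records and reads back predictions.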

So what does this mean? It means that the apps we use will get smarter and work better for us. They will be able to predict our preferences and help with many problems that previously only humans could handle. Speech understanding, pattern recognition, personal recommendations, document and image categorisation, fraud detection and all sorts of creative tasks will be done by software. It will mean a shift in jobs, as software increasingly takes over tasks that were previously reserved for humans. At first we will find this scary, but then, as usual, we will get used to it and come to expect some smartness from everything, including everyday objects like cars, TVs and coffee machines. We will expect to be able to talk to these things and have them talk back.

We are still in the early days of this machine learning renaissance, and we have much to learn about what it means for businesses and people’s jobs, in particular white-collar jobs. With access to enormous cloud computing services, more and more solutions will appear that try to predict and analyse our behaviour. More and more tasks will become software tasks, and this will change the job market. Companies that want to stay relevant, even non-IT companies, need to think about how software can help them.

Digital Transformations


Economist Events hosted an event in Madrid on 4-5 November 2015 called Digital Transformations. The topic was how digital technologies are disrupting businesses and changing our lives. In fact, I believe we are experiencing the last years of the world as we know it. New technologies are coming, in robotics, artificial intelligence, predictive intelligence and virtual reality to name a few, that will have a huge impact and shape the 21st century just as earlier technologies shaped the centuries before. Here are some thoughts on the discussion.

Technology moves fast, but the diffusion of technology into our everyday lives is slow by comparison. Many of the technologies that will disrupt businesses in the future emerged in the 1990s and 2000s; just think of the Internet becoming mainstream and the rise of the smartphone. The iPhone is already eight years old. These technologies have changed our lives, sure, but most businesses still work more or less as they did in the 20th century: banks, insurance companies, hospitals, schools and governments, to name a few. The reason is that technological change is very much a human issue, and it takes time to change the way we work and behave.

In my lectures, I have always talked about the first decade of the 21st century as the digital decade, the time when everything analogue became digital. The second decade is the transformation decade, when the intangible things change: business models, shopping behaviour, ownership, lifestyle and so on. It is also the decade of confusion and collisions between the old way and the new. Think of the content owners’ war on piracy, or how controversial Uber is, causing riots and strikes.

However, there seems to be a general disconnect between broad awareness of the digital transformation and the few who “get it”. Companies like Google, Facebook and Amazon operate on a scale we have not seen before. At the same time, many traditional businesses are ignorant of the possibilities of digital technologies and face being disrupted. The conversation is simply not taking place. An important theme echoed at the event was the need to educate leaders about the possibilities of emerging technologies.

Some companies do seem to be aware of the trends. European carmakers like Daimler are shifting their attention to digital opportunities. The odds, however, are stacked against them. Incumbent companies carry heavy baggage. They may understand Clayton Christensen’s disruptive innovation theory, but he also had another theory that explains why incumbent companies fail: the Resources, Processes and Values theory, or RPV theory. It states that a company’s resources (its people), the processes that define how work is done, and the values of the company determine how the company functions. These become optimised over time, so when a new opportunity appears in the market it is difficult for the company to take advantage of it. And if the opportunity is a cheaper and less profitable version of the existing product, there is strong resistance to change.

Kodak was mentioned as an example. Kodak enjoyed great success all through the 20th century; by 1975 they had 90% of film sales and 85% of camera sales in the US. That would be classified as owning the industry. Kodak had experimented with digital cameras in the 1970s and had actually built the first electronic camera. However, the technology was crude, expensive and low quality. And this is always the reaction of incumbent companies when faced with a threat: it is expensive, it is low quality and nobody wants it. That may be true at any given moment, but exponential growth distorts our view. By the time the threat is real, it is usually too late.

However, transforming a company in disruptive times is very much possible. There was another company in the same business as Kodak that was not mentioned: Fuji. When they realised the shift from film to digital was coming, they completely reorganised the company, laying off thousands of people. They squeezed as much as they could out of the film business while it lasted by putting cheap cameras on the market, and they diversified their chemical operations into cosmetics and sold that unit. While Kodak is an example of a mammoth that went extinct, Fuji is an example of a company that survived.

Another discussion at the event was about real time. Business has become real-time: if there is a moment, you have to respond to it. Many businesses find this hard. If a customer complains on Twitter, you cannot call a meeting to discuss the reaction. You have to respond now.

Privacy was also a big topic. We are moving from a very private world into a sharing world. Any move to limit people’s privacy is not taken lightly, at least if it is not done right. People seem willing to give some of their privacy away if they receive value in return. Even what is probably the most private thing of all, your health, is not excluded: if a service genuinely benefits their health, people will accept it. In general, the sense of the discussion was that privacy is something we have not figured out yet.

Big data also got its share of the discussion. While the definition is not so clear, it is obvious that data is growing exponentially; the Internet of Things alone will bring floods of data. But what matters is the value we get from the data, and that is the challenge many businesses face. There are also technical issues, as traditional enterprise systems simply cannot cope with this scale. Not surprisingly, the internet gave rise to new solutions, for example a family of databases collectively called NoSQL, to meet this demand. Hadoop is another example of a new solution built to tackle these problems.

Another great discussion was about machine learning, which is about machines getting better with experience. One branch of machine learning is deep neural networks, which have shown remarkable progress in the last few years. Advances in machine learning are likely to have a huge impact on the labour market, but it is not clear how this will play out. Will people be replaced? Will new jobs appear instead? How will we retrain all the people? What will the unions do? These are some of the hard questions people are asking. One theme at the event was that technology is ultimately about augmenting humans, making people more productive.

Finally, I want to mention blockchain, a recurring theme at the event. Understanding blockchain is not easy, as it is not a product but rather a byproduct of Bitcoin: a protocol layer, a distributed trust mechanism. Someone described it as a living organism. At the very least, blockchain has the potential for significant impact, perhaps on a par with the Internet.

Events like this push the evolution of technology forward. They are about an intellectual conversation on technology trends and their impact. The world will face new challenges from technology in the years ahead; at least some people got together and talked about it.

Picture from Sol, Madrid.