My course, New Technology, started last week at Reykjavik University. The course is broadcast live with Hangouts, all videos are posted to YouTube (automatically after each hangout), and audio slides are available in my New Technology channel on Vimeo.
The objective of this course is to look at innovations and technology trends, learn from history, and use theories of innovation to draw lessons and identify patterns, so that we can evaluate new technology as it emerges and interpret its impact.
In the course we look at how to keep up to date with technology trends. In particular, we will look at communications, wireless devices, mobile phones and TV, home appliances, the Internet and other consumer devices. The course will discuss what future trends will emerge, which standards and companies will be successful, and the effects the technology will have on society.
The world is constantly changing and reinventing itself. One of the driving factors of change is technology, and the rate of this change seems to be increasing. Companies with solid business models suddenly find themselves struggling as consumer behavior changes, and suddenly good management practices are not enough. In fact, most companies fail because they resist change and don't adapt quickly enough to technological disruptions. Technology is what drives businesses, providing growth and new opportunities.
But do we really understand how technology changes and how it impacts businesses and people? Why do we find technology so unpredictable and difficult to understand? Why do businesses fail to seize the opportunities of new technologies and lose their markets to newcomers?
Take as an example the phone giant Nokia, which dominated the phone market and took 57% of industry profits in 2007. Only five years later those profits were gone, and Apple, a company that didn't even make phones until 2007, now generates 66% of the profits. How could this happen?
As it turns out, there are many studies and theories on how technology evolves over time. Some things follow a remarkably predictable path of evolution, while other developments are highly unpredictable. By understanding these patterns we can start to evaluate technology, and even predict how it will evolve and how it will disrupt our lives.
Here is the first video from Lecture L01 Introduction:
And here is part 2:
Given the hype for virtual reality of late, one might think that this immersive form of reality is an entirely new innovation. But the history of virtual reality (VR) goes back many years. For example, the respected computer scientist Ivan Sutherland experimented with what he called the Ultimate Display in 1965. In the 1990s there was growing interest and investment in VR, but none of it went mainstream. Finally, in 2016, we are seeing real progress in virtual reality. 2016 is the year of VR.
Here is a news report from Primetime Live from 1991:
It is funny to think back to 1991. The dominant PC was based on the Intel 386 and the newer 486. The most used operating system was MS-DOS, while Windows 3.0 was gaining popularity. The closest you could get to a virtual reality experience was to go out, rent a VHS tape and watch a movie.
It is not surprising that these early VR systems never took off: they were crude and low-performance, and with price tags of 50 to 200 thousand dollars they were nowhere near an attractive consumer price point. This is a classic example of the adjacent possible: the technology was simply not ready. The computing power was not fast enough, displays were too crude and low-resolution, and everything was still too expensive.
Why should we think it is different now? In a panel on virtual reality at Slush Play in Reykjavik in 2015, the main concern of the panellists was that the current VR wave would be yet another disappointment. So many times have we seen hope for VR, only to experience disappointment.
What is different now is that the enabling technologies seem to be ready. The required components are now mass-produced at low cost, thanks partly to the smartphone revolution. Even with low-end smartphone-mounted headsets we are seeing promising experiences. And thanks partly to video games, the processing power required is here.
Most people think VR is for games, but VR has two things that make it much more than video games. First, VR goes straight to your "lizard brain". I tried the Oculus Rift early in 2015, and in one scene I stood on top of a skyscraper in New York. Looking down, I saw traffic. I knew I was safe in a room, but my brain sensed danger and nothing I did could change this. VR messes with your brain. Second, VR reaches far beyond games: think movies, education and just about any experience.
The main problem, though, is that a truly immersive VR experience requires huge computing capacity. According to Bloomberg Business, only 1% of the PCs shipped in 2016 will be good enough to run VR. For the Oculus Rift, Facebook recommends a graphics card equivalent to the NVIDIA GTX 970 or AMD 290 or greater, and a CPU equivalent to the Intel i5-4590 or greater.
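To make the requirement concrete, here is a minimal, hypothetical sketch in Python of the kind of "VR-ready" check a PC maker or storefront might run. The spec keys and the numbers standing in for component tiers are illustrative assumptions, not an official Oculus compatibility test:

```python
# Hypothetical "VR-ready" check: compare a machine against recommended specs.
# The keys and numeric "tiers" below are illustrative stand-ins, not real
# benchmark scores or an official compatibility tool.
RECOMMENDED = {"gpu_tier": 970, "cpu_tier": 4590, "ram_gb": 8}

def vr_ready(machine: dict) -> bool:
    """Return True only if every spec meets or exceeds the recommendation."""
    return all(machine.get(spec, 0) >= minimum
               for spec, minimum in RECOMMENDED.items())

my_pc = {"gpu_tier": 960, "cpu_tier": 4590, "ram_gb": 16}
print(vr_ready(my_pc))  # False: the GPU falls below the recommended tier
```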
Despite this, the technology that makes VR work definitely exists. While the cost is lower than the 1990s price tag, it is still expensive, which suggests that the rollout to consumers will be slow and might take years. But it could also be an opportunity for PC makers to sell VR-ready machines. In a smartphone-obsessed market, this might be a welcome boost for PC makers. And if VR is as great as many people say, investing in the required hardware is a no-brainer.
Or could we even see the return of arcades, this time as VR arcades? I doubt it, but looking back at computer history, when a technology works but is too expensive, economic models are created to allow access. Examples are the time-sharing machines of the 1970s and the video game arcades of the 1980s.
The real danger, though, is a lack of good content. We still need to figure out what works in VR. This is a classic question for every new platform to date. When the iPhone App Store became available, it took some time to sort out what worked. The same was true of PCs, of Windows, and of every major new platform.
Given the number of VR startups that presented at Slush in Helsinki in September 2015, we don't have to worry about a lack of interested parties with ideas for a killer app. The potential of VR is enormous, and it goes well beyond video games. What will work remains to be seen, but 2016 is definitely the year that VR gets a lot of attention.
The winner of the TechCrunch Disrupt 2015 Startup Battlefield, which took place in London in December, was a small startup from England called Jukedeck. They presented a solution that helps you create music for videos. The creation is left to an algorithm that automatically generates the music according to your preferences: you can specify the type and mood of the music as well as its length. The real news here is that people now have access to machine learning algorithms that can generate music. Forget the music-for-videos part: algorithms are now composing music. This is just one of many examples of the new generation of machine learning algorithms that have emerged in the last few years. We are in the early stages of a new era of machine learning.
Creating an algorithm to generate music is not new. Prof. David Cope at the University of California, Santa Cruz experimented with computer music at the end of the 20th century. He specialised in classical music, in particular Johann Sebastian Bach. By 1997 he had perfected his algorithm to the point where it generated compositions good enough to fool the general listener. Maybe not masterpieces, but sufficiently good.
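To give a flavour of what programmatic composition can look like, here is a minimal sketch of a first-order Markov chain in Python: it counts which note tends to follow which in a tiny made-up corpus and then samples a new melody. This is only an illustration of the rule-based approach, not Prof. Cope's actual system, which recombined analysed fragments of real scores:

```python
import random

# Tiny made-up "corpus" of melodies as note names.
corpus = ["C D E C", "C D E F G", "E F G E C", "G F E D C"]

# Count note-to-note transitions (a first-order Markov chain).
transitions = {}
for melody in corpus:
    notes = melody.split()
    for current, following in zip(notes, notes[1:]):
        transitions.setdefault(current, []).append(following)

def compose(start="C", length=8):
    """Walk the chain, picking each next note from the observed followers."""
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))
    return " ".join(melody)

print(compose())  # e.g. "C D E F G E C D"
```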
In 2012, Google announced their research into finding cats in videos. It may seem like a silly problem to solve, but it was actually a breakthrough in computer science. Ever since the introduction of the first computers, ironically called "electronic brains", people have been fascinated by their capacity to act like a brain. However, computers are far from anything like the brain. If you used a computer to multiply 1,000 three-digit numbers, even an old 1960s machine would be far better than any human. It turns out that what is hard for us humans is easy for computers, and what we find easy is hard for computers. Ask a two-year-old to point to the cat in a picture and it is an easy task; give that problem to a computer and it is remarkably difficult to solve.
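The first half of that asymmetry is easy to demonstrate. Under one reading of the example, the snippet below performs 1,000 multiplications of three-digit numbers, and on any modern machine it finishes in well under a millisecond:

```python
import random
import time

# 1,000 multiplications of random three-digit numbers.
pairs = [(random.randint(100, 999), random.randint(100, 999))
         for _ in range(1000)]

start = time.perf_counter()
products = [a * b for a, b in pairs]
elapsed = time.perf_counter() - start

print(f"{len(products)} products in {elapsed * 1000:.3f} ms")
```

Writing a program that reliably points to the cat in a picture, by contrast, remained out of reach for decades.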
The fact that problems like image recognition and creative tasks such as composition are hard has not stopped people from working on machine learning, or Artificial Intelligence (AI) in general. In the 1980s there was a promising new wave of AI technology called neural networks: networks that in some ways try to act like the brain, with nodes and connections between them. The idea was to train the network to get better at specific tasks. Although promising at the time, these networks did not deliver much and became a disappointment. Yet another AI winter followed, not an uncommon event in the quest for intelligent machines over the years.
Neural networks may not have worked in the 1980s, but since then we have seen exponential growth in computing power, storage and bandwidth. Now we have cloud computing and big data. Furthermore, we have video games, and video games need Graphics Processing Units, or GPUs, which means we can build really powerful supercomputers relatively cheaply. This is the adjacent possible for a new era of AI. It turns out that the basic idea of neural networks was not wrong; the capacity to make it work simply was not available in the 1980s. A really good example of the adjacent possible.
Prof. Cope's algorithm was a programmatic way to generate a specific type of musical composition. The new type of algorithms we see today, like the Jukedeck service, work in a totally different way: machine learning algorithms are trained, not programmed. Deep learning algorithms are fed huge amounts of data and use layers of nodes to try different combinations, strengthening the connections that work, and repeating. One class is Recurrent Neural Networks, which can solve particular types of problems like speech recognition, handwriting recognition and, surprisingly, composing music.
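To illustrate "trained, not programmed", here is a minimal sketch of a tiny feed-forward network learning XOR with plain NumPy. Nothing in the code encodes the answer: the connection weights are repeatedly nudged in whatever direction reduces the error until the behaviour emerges. It is a toy, far simpler than the recurrent networks used for speech or music, and the layer size and learning rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

# Two layers of nodes with weighted connections between them.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass through the layers
    output = sigmoid(hidden @ W2 + b2)
    # Backpropagation: strengthen or weaken every connection in
    # proportion to its contribution to the error, then repeat.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```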
Even with all the knowledge about machine learning that is available, creating machine learning software is really hard and requires huge infrastructure. The technology is still very much academic, but it is starting to produce practical solutions that open up new levels of possibilities. Big technology vendors are democratising machine learning and offering relatively easy and affordable access to machine learning software. Google has its Prediction API and Amazon has its Machine Learning service. Access to machine learning systems is now as simple as signing up for a subscription on the web.
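As a rough illustration of how low the barrier has become, here is what a real-time prediction request against Amazon's Machine Learning service looks like with the boto3 Python SDK. The model ID, endpoint and record fields are hypothetical placeholders; you would first train a model in the service and enable its real-time endpoint:

```python
import boto3

# Hypothetical real-time prediction against an already-trained model.
client = boto3.client("machinelearning", region_name="us-east-1")

response = client.predict(
    MLModelId="ml-EXAMPLE-MODEL-ID",          # placeholder model ID
    Record={"age": "34", "plan": "premium"},  # one input row, as strings
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)
print(response["Prediction"])  # predicted label or value plus class scores
```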
So what does this mean? It means that the apps we use will get smarter and work better for us. They will be able to predict our preferences and help with many problems that previously only humans could handle. Services like speech understanding, pattern recognition, personal recommendations, document and image categorisation and fraud detection, as well as all sorts of creative tasks, will be done by software. It will mean a shift in jobs, as software increasingly takes over tasks previously reserved for humans. At first we will find this scary, but then, as usual, we will get used to it and come to expect some smartness in all things, including everyday objects like cars, TVs and coffee machines. We will expect to talk to these things and have them talk back.
We are still in the early days of this machine learning renaissance, and we have a lot to learn about what it means for businesses and people's jobs, in particular white-collar jobs. With access to enormous cloud computing services, more and more solutions will appear that try to predict and analyse our behaviour. More and more tasks will become software tasks, and this will change the job market. Companies that want to stay relevant, even non-IT companies, need to think about how software can help them.