Every major technology has a period in which it is dominant. The smartphone, for example, is the most universally dominant technology we have today. In 2015 there were 2.6 billion smartphone subscribers, and on average they spent three hours and forty minutes using the device every day. This device is so powerful that people keep it within reach wherever they are. Prior to the smartphone came the Internet, and prior to that the PC. So every major technology has its time. And now it seems that the period of the smartphone as the most innovative and dominant technology is coming to an end.
Of course, we will not give up our smartphones, just as we still use the Internet and PCs. But the period in which the smartphone keeps improving as fast as it has is coming to an end. Every product goes through what is called an S-curve: performance improvements are very slow in the beginning, then improvements increase exponentially, until the impact of improvements slows down and the product enters a sustaining period, or plateau. It may be replaced by other products or just continue to be used as is. Technology adoption happens in a similar way. When a product is new, it is often low-performance and expensive. This was the case for the personal computer, as it was for the Internet and for the smartphone. The people that buy products at this stage are the early adopters, people that believe in the idea and want to be part of it. For example, hobbyists were the early adopters of PCs, people that wanted their own computers. Early adopters of the Internet were technical people, mostly in education and research agencies.
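The S-curve described above is usually modeled with a logistic function: slow at first, then exponential, then a plateau. Here is a minimal sketch in Python; the ceiling, midpoint and rate values are illustrative assumptions, not measured data.

```python
# Logistic S-curve: slow start, exponential middle, plateau at the end.
# Parameter values below are made up for illustration only.
import math

def s_curve(t, ceiling=100.0, midpoint=10.0, rate=0.6):
    """Performance of a hypothetical technology at time t (years)."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in (0, 5, 10, 15, 20):
    print(f"year {t:2d}: performance {s_curve(t):5.1f}")
```

At the midpoint the curve sits at exactly half the ceiling, and well past it the curve flattens out near the ceiling, which is the "sustaining period" the text describes.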
The World Economic Forum talks about the coming fourth industrial revolution, assuming that the third used computer and information technology to automate production. True, but I think within the third there are many waves of revolution, each creating new opportunities. The first generation of electronic programmable computers was built in 1947. Over the next eighteen or so years, these computers were found mainly in government and big corporations that could afford such big machines, operated by people in white lab coats. The impact of these machines was a dramatic change in work called automation, where thousands of clerks processing financial transactions were laid off as invoices, paychecks and other financial transactions became records on magnetic tape read into these machines.
However, in about 1965 IBM released the System/360, which was a breakthrough in computer and software architecture. Machines like the PDP-8 allowed smaller companies to have a computer, and the automation continued. However, these machines were still so expensive that individuals could not afford them. They were also maintained by professionals, and writing programs for them was difficult and done by the companies that sold or leased them.
About sixteen years later, the PC revolution started. In 1981 IBM released the IBM PC, a generative computer that created the PC industry. It is remarkable that a company like IBM could pull this off. Xerox had built a PC in the 70s, but management decided to do nothing with it. Clayton Christensen's Resources, Processes and Values framework can explain why Xerox failed: their resources (among them people), their processes and their values were in the copying business, not the PC business. Remarkably, it also explains how IBM could build the PC. They created a separate unit, far from headquarters, that did not go by the traditional IBM ways of doing things. For example, they licensed MS-DOS from Microsoft instead of building their own operating system.
The PC dominated for fourteen years. In 1995, the Internet started to take off. Several events led to this. One was Tim Berners-Lee's World Wide Web. Others were acts by the US government to allow use of the government-funded Internet as the "Information Superhighway", as they called it. Contrary to common belief, Al Gore actually was influential in getting this through. But perhaps the most important was the introduction of WinSock, an API (Application Programming Interface) that allows computers to talk to other computers using TCP/IP. Microsoft implemented this after several loud requests from corporate clients that had big Unix machines all connected using TCP/IP, and then a separate network of PCs; they wanted them all connected. After WinSock, a programmer in Tasmania, Australia, called Peter Tattam released Trumpet Winsock. This allowed people with PCs to get on the Internet. Many ISPs used this program to allow their customers to connect, including my own company founded in 1993, Margmiðlun.
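To see concretely what a sockets API like WinSock gave PC programmers, here is a minimal modern equivalent in Python: the same open-connect-send-receive pattern, run entirely on the local machine so it needs no network. The echo message is just an illustration.

```python
# The essence of what a sockets API (like WinSock on the PC) exposes:
# open a TCP/IP connection, send bytes, receive bytes.
import socket
import threading

def echo_once(server):
    """Accept one connection and echo back whatever it receives."""
    conn, _addr = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Server side: listen on a free local port (port 0 = OS picks one).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# Client side: the kind of call WinSock made available to PC programs.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, internet")
    reply = client.recv(1024)

server.close()
print(reply.decode())
```

Once an API like this was standard on the PC, any program could talk TCP/IP, which is exactly why corporate clients pushed so hard for it.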
Then, after twelve years, the iPhone was released. The iPhone was so revolutionary that it defined how smartphones should be. It was released just before the Mobile World Congress in Barcelona, giving existing smartphone makers no time to come up with an answer; pictures of concept phones dominated the congress.
There is a pattern in this. The period of the programmable computers was 18 years. The next period was 16 years, and the PC period was 14 years. The Internet period was 12 years, and now the smartphone period has lasted for 10 years. The length of these periods is getting shorter, which suggests that the next period might be about to start.
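The arithmetic of this pattern is worth spelling out: each period starts where the previous one ended, and each is two years shorter than the last. A small sketch, using the start years and lengths given in the text:

```python
# Each technology period from the text: (start year, length in years).
periods = [
    ("programmable computers", 1947, 18),
    ("minicomputers",          1965, 16),
    ("PC",                     1981, 14),
    ("Internet",               1995, 12),
    ("smartphone",             2007, 10),
]

for name, start, length in periods:
    print(f"{name}: {start}-{start + length} ({length} years)")

# If the 2-years-shorter trend holds, the smartphone period ends around:
next_start = 2007 + 10
print("next period would begin around", next_start)
```

Note how neatly the sequence chains: 1947 + 18 = 1965, 1965 + 16 = 1981, and so on, landing on 2017 as the speculative start of the next period.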
Another interesting thing is who controls these revolutions. In the computer revolution of the 50s and 60s, control was very much restricted to the builders of these machines. The minicomputers were more open, but with the PC the generative pattern emerged: anybody could write a program to run on these machines. They were made by people that wanted to break the control of the big computer companies. In fact, many of the early PC machines did not come with much software; the MS-DOS system came on a floppy disk, and that was it. Similarly, the Internet became the network of choice due to its generative nature. Vint Cerf, one of the designers of TCP/IP, calls this permissionless innovation.
As we approach the end of the smartphone period, we see several potential technologies on the horizon. Virtual reality is finally here, with powerful headsets that consumers can afford. Augmented reality headsets like the Microsoft HoloLens are also coming to the market this year. The Internet of Things is becoming more dominant, with many new products like door locks, thermostats, speakers and so on. Perhaps the most interesting, though, is artificial intelligence. What is interesting with AI is that companies like Amazon and Google are offering their AI engines and vast data centers to developers, placing this powerful technology in the hands of individuals. Whatever the next dominant technology will be, it is likely to be smart, very visual, connected and social.
My course, New Technology, started last week at Reykjavik University. The course is now broadcast live with Hangouts; all videos are posted to YouTube (automatically after each hangout), and audio slides are in my New Technology channel on Vimeo.
The objective of this course is to look at innovations and technology trends, learn from history, use theories of innovation to draw lessons, and try to see patterns, so that we can evaluate new technology currently emerging and interpret its impact.
In the course we look at how to keep up to date on technology trends. In particular, we will look at communications, wireless devices, mobile phones, TV, home appliances, the Internet and other consumer devices. The course will discuss what future trends will emerge, which standards and companies will be successful, and the effects that technology will have on society.
The world is constantly changing and reinventing itself. One of the driving factors of change is technology, and the rate of this change seems to be increasing. Companies that have solid business models suddenly find themselves struggling as consumer behavior changes. Suddenly, good management practices are not enough. In fact, most companies fail because they resist change and don't adapt quickly enough to technological disruptions. Technology is what drives businesses, providing growth and new opportunities.
But do we really understand how technology changes and how it impacts businesses and people? Why do we find technology so unpredictable and difficult to understand? Why do businesses fail to seize the opportunities of new technologies and lose their markets to newcomers?
Take as an example the phone giant Nokia, which dominated the phone market, taking 57% of the profits in 2007. Only five years later those profits are gone, and now Apple, a company that didn't even make phones until 2007, generates 66% of the profits. How could this happen?
As it turns out, there are many studies and theories on how technology evolves over time. In fact, some things follow a remarkably predictable path of evolution, while other developments are highly unpredictable. By understanding these theories we can start to evaluate technology and even predict how it will evolve and how it will disrupt our lives.
Here is the first video from Lecture L01 Introduction:
And here is part 2:
Given the hype for virtual reality of late, one might think that this immersive form of reality is an entirely new innovation. But the history of virtual reality (VR) goes back many years. For example, the respected computer scientist Ivan Sutherland experimented with what he called the Ultimate Display in 1965. In the 1990s there was growing interest and investment in VR, but none of it went mainstream. Finally, in 2016, we are seeing some real progress in virtual reality. 2016 is the year of VR.
Here is a news report from Primetime Live from 1991:
It is funny to think back to 1991. The dominant PC was based on Intel 386 and the newer 486. The most used operating system was MS-DOS, while Windows 3.0 was gaining popularity. The closest you could get to a virtual reality experience was to go out and rent a VHS video tape and watch a movie.
It is not surprising that these early VR systems never took off: they were crude and low-performance, and the price tag was 50-200 thousand dollars, not a very attractive consumer price point. This is a classic example of the adjacent possible: the technology was simply not ready. The computing power was not fast enough, displays were too crude and low-res, and everything was still too expensive.
Why should we think it is different now? In a panel on virtual reality at Slush Play in Reykjavik in 2015, the main concern of the panellists was that the current VR wave would be yet another disappointment. So many times have we seen hope for VR, only to experience disappointment.
What is different now is that the enabling technologies seem to be ready. The required components are now mass-produced at low cost, thanks partly to the smartphone revolution. Even with low-end smartphone-mounted headsets we are seeing promising experiences. And thanks partly to video games, the processing power required is here.
Most people think VR is for games. But VR has two things that make it so much more than video games. First, VR goes straight to your "lizard" brain. I tried the Oculus Rift early in 2015, and in one scene I stood on top of a skyscraper in New York. Looking down, I saw traffic. I knew I was safe in a room, but my brain sensed danger, and nothing I did could change this. VR messes with your brain. Second, VR can be so much more than games: think movies, education, almost any experience.
The main problem, though, is that a truly immersive VR experience requires huge computing capacity. According to Bloomberg Business, only 1% of the PCs shipped in 2016 will be good enough to run VR. For the Oculus Rift, Facebook recommends a graphics card equivalent to an NVIDIA GTX 970 or AMD 290 or greater, and a CPU equivalent to an Intel i5-4590 or greater.
Despite this, the technology is definitely ready for VR to work. While the cost is far below the 1990s price tag, it is still expensive. That would suggest that the rollout to consumers will be slow and might take years. But it could also be an opportunity for PC makers to sell VR-ready machines. In a smartphone-obsessed market, this might be a welcome boost for PC makers. Also, if VR is as great as many people say, investing in the required hardware is a no-brainer.
Or could we even see the return of arcades, now as VR arcades? I doubt it, but looking back at computer history, when a technology works but is too expensive, economic models are created to allow access. Examples are the time-sharing machines of the 1970s and the video game arcades of the 1980s.
The real danger, though, is a lack of good content. We still need to figure out what works in VR. This is a classic question for every new platform to date. When the iPhone App Store became available, it took some time to sort out what worked. The same was true for the PC, for Windows, and for every new major platform.
Given the number of VR startups that presented at Slush in Helsinki in September 2015, we don't have to worry about a lack of interested parties with ideas for a killer app. The potential of VR is enormous, and it goes well beyond video games. It remains to be seen what will happen, but 2016 is definitely the year VR gets a lot of attention.