The 5 technology breakthroughs that didn’t make money for their inventors

Thanks to Tim Berners-Lee’s 1989 invention of the World Wide Web, information spreads around the globe in seconds, instead of days.

Without it, breaking news would take hours, if not days, to reach us; we wouldn’t hear directly from people on the ground, and governments wouldn’t be toppled by movements organised online. The average person wouldn’t be able to rise to meteoric fame in a matter of minutes. The world as we know it would be fundamentally different.

The internet and the World Wide Web are known quantities. Those breakthroughs have been heralded as world-altering countless times by publications and public figures alike. The internet may have been a revolution, but it certainly didn’t bring us mobile phones, electric cars or the Egg Cuber.

But what about the ground-shifting technological breakthroughs that fewer people remember? Some have simply been lost in the annals of time, while others are on the bleeding-edge of technological research, waiting for their time to shine.

That’s why we’ve set out to fix this problem, giving five incredible technological breakthroughs from the past and present the limelight they deserve.

Read on to find out exactly what the 5 are.

———————————————————————————–

1. Xerox Alto & Xerox Star


Think of Xerox and you’ll think of copiers, printers and other slightly-mundane objects scattered around the office. But without Xerox, it’s unlikely we’d have the Mac, or Windows, or the computer as we know it today.

Back when computers were hulking beasts displaying no more than a few tens of characters per line, Xerox was feverishly working away on one of the biggest breakthroughs in modern computing. Known as the Xerox Alto, and built in 1973 primarily as an experiment at its Palo Alto Research Center (PARC), the computer never gained much recognition, receiving only a limited rollout at an incredibly high price.

However, the technology the Alto contained was the start of something completely new. With the Alto, Xerox created the fundamentals of image rendering on computers. Through an imaging operation known as BitBLT (bit block transfer), rectangular blocks of pixels – bitmaps – could be built from binary data, copied and overlaid to create windowed environments and interactive on-screen elements.

Because of this technology, the Alto played host to the first WYSIWYG word processor and to rudimentary video game projects. It heralded the future of computer design and laid the foundations for the Xerox Star to build upon – and, by extension, for every graphical user interface we use today.
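
To make the idea concrete, here’s a minimal sketch of what a BitBLT-style operation does – copying a rectangular block of pixels into a larger bitmap at a chosen position. It’s a toy illustration of the concept only, not Xerox’s actual routine, and the bitmap sizes, values and function name are invented for the example.

```python
# Toy illustration of the BitBLT idea: copy a small bitmap ("window")
# into a larger one ("screen") at a chosen offset.

def bitblt(dest, src, dest_x, dest_y):
    """Copy the source bitmap into the destination bitmap at (dest_x, dest_y)."""
    for y, row in enumerate(src):
        for x, pixel in enumerate(row):
            dest[dest_y + y][dest_x + x] = pixel

screen = [[0] * 8 for _ in range(8)]   # an 8 x 8 bitmap of blank pixels
window = [[1, 1, 1],                   # a 3 x 3 "window" glyph to overlay
          [1, 0, 1],
          [1, 1, 1]]

bitblt(screen, window, dest_x=2, dest_y=2)
for row in screen:
    print("".join("#" if pixel else "." for pixel in row))
```

The real operation also let the source be combined with what was already on screen using logical operators such as AND, OR and XOR, which is what made overlapping windows, cursors and fonts quick enough to redraw interactively.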


The Star, released in 1981, featured Ethernet connectivity, network printing, shared directories, internetwork routing and a file-management system, and came bundled with a WYSIWYG editor as standard. The Star was completely ahead of its time.

Unfortunately, it was stratospherically expensive. At a staggering $16,000 a unit – around $41,000 in today’s money – a business would need at least two or three Stars to see any commercial benefit. For comparison, the far more common Commodore VIC-20 cost just $300 per unit, or around $771 in modern terms.

The Alto and the Star both missed their turn in the limelight, despite the incredible technological breakthroughs they made. While price certainly held them back from mass appeal, Xerox’s success was stolen away by a young and enterprising Steve Jobs.

Having visited Xerox’s PARC site in 1979, Jobs became interested in the Alto thanks to its graphical interface and its use of a then-unusual mouse. His trip spurred Xerox into sharing its expertise with Apple’s Lisa team, helping them understand how it had created a user interface, why bitmaps were important and where it believed the future of computing was headed.

Let’s be clear: this wasn’t a business partnership. Xerox effectively gave its secrets away for next to nothing, even handing Apple an Alto and a Star to dissect. Come 1983, two years after the Star’s release, Apple shipped the Lisa.


The Lisa was yet another commercial flop thanks to its price, but the features inside went on to inform the Macintosh and cemented Apple’s reputation for technological innovation at the time. Apple had effectively commercialised Xerox’s breakthrough, and didn’t owe Xerox a single penny in doing so.

Since then, the basic technology behind the GUI hasn’t changed drastically. Layouts have come and gone – Microsoft Bob’s abysmal attempt at something new among them – but the windowed environment of the Xerox Star and its early GUI have gone on to inform almost every single interaction we have with a screened electronic device.

Think of it this way, without Xerox’s breakthrough, would we have the World Wide Web as we know it today?


2. The CCD, Kodak’s DCS and the rise of digital cameras


Without Kodak, digital cameras wouldn’t exist as we know them today. Your quick selfie in front of an iconic monument wouldn’t be shareworthy, and without digital cameras there’d be no need for Instagram to exist either.

Kodak’s Digital Camera System enabled war photographers to deliver breaking-news images to the world in close to real time. News outlets are now able to broadcast high-definition images live around the world. The news channels we take for granted just wouldn’t exist as we know them today.

Like many creators of early technological breakthroughs, Kodak lost out on the prize at the end. The DCS goes largely unremembered, and it all began thanks to another, more recognised, invention: the Charge-Coupled Device (CCD).

Most people know that the CCD helped facilitate the advent of modern digital photography, with its inventors receiving the Nobel Prize in Physics in 2009 for their groundbreaking 1969 invention. However, it was Kodak’s work with early CCD technology that really made digital imaging rival traditional film.


Devised by Dr. George Smith and Dr. Willard Boyle while the pair were experimenting with semiconductor technology, the CCD puts the photoelectric effect into practice: incoming light is converted into electrical charge, which is then read out of the chip and turned into digital data.
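
In rough terms, that read-out works like a bucket brigade: the charge collected in each pixel is shifted off the sensor one row at a time into a read-out register, then out to an amplifier and digitised. The sketch below mimics that flow; the array size, charge values and 8-bit clipping are made up purely for illustration.

```python
# Toy sketch of CCD read-out: charge is shifted off the sensor row by row
# into a read-out register, then digitised one value at a time.

def read_out(sensor):
    """Shift 'charge' values off a 2D sensor array and return the digitised image."""
    image = []
    while sensor:
        readout_register = sensor.pop()       # bottom row shifts into the register
        row = []
        while readout_register:
            charge = readout_register.pop(0)  # charge packets shift out one by one
            row.append(min(255, charge))      # crude 8-bit digitisation
        image.insert(0, row)
    return image

# A 4 x 4 grid of made-up charge values, as if light had fallen on the sensor.
sensor = [[10, 200, 180, 12],
          [15, 260, 240, 14],
          [11, 190, 210, 13],
          [ 9,  12,  11, 10]]

for row in read_out(sensor):
    print(row)
```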

While the CCD was certainly a technological marvel, it wasn’t until a young Steve Sasson picked up on the technology at Kodak that it found its defining purpose. There, over the course of a year, he turned a simple 100 x 100 pixel CCD from Fairchild Semiconductor into a 0.01-megapixel camera.

Weighing close to 4kg, running on 16 AA batteries and taking 23 seconds to record a black-and-white image onto a cassette tape, Sasson’s contraption was the first digital camera. From there Kodak went on to develop a colour CCD but, despite the breakthrough, it was beaten to market by Sony in 1981.

However, Sony’s Mavica wasn’t truly a digital camera; it was an electronic analogue still camera. It used proprietary two-inch floppy disks to store images, and could never provide a high enough resolution to please the professional or consumer market. Kodak still had a chance.

In the 1980s, Kodak led the charge on the digital revolution by putting together a crack team of engineers to develop a groundbreaking megapixel digital camera. The team, which included Sasson and James McGarvey, Kodak’s senior project engineer and the chief designer of its professional cameras, knuckled down and produced an early prototype in 1986. After extensive testing by Associated Press photographers in 1987 and 1988, this prototype eventually became the Kodak DCS, or Digital Camera System.

Released in 1991 as the first commercially available digital SLR system, it paired a 1.3-megapixel sensor with the body of an off-the-shelf Nikon F3 SLR. Costing $20,000, or around $34,000 in today’s money, it’s clear the DCS wasn’t designed for anyone but professionals.


It allowed photographers to take either colour or monochrome photos, storing up to 600 images on an attached 200MB “portable” 5kg hard drive with a built-in preview screen. Images could be transmitted across telephone lines back to editors’ desks, or sent directly to an Apple Mac to be edited. That cut the time it took for images to reach news desks, and thus the time it took for a news story to break.

It may not have been a commercial success, selling around 920 units, but there was enough interest for Kodak to develop the DCS 200 for release in 1993 and then various units thereafter.

The success of the DCS changed the still-image and video camera industry; all the big camera makers were soon falling over each other to go fully digital. Nikon, with which Kodak had eventually formed a relationship around the release of the DCS 400 series in 1994, essentially took the technology for itself, bundling it into a less bulky frame.

It didn’t help that Kodak’s higher-ups were making odd business decisions instead of moving wholeheartedly into the then relatively open consumer digital camera market. Kodak attempted to capture consumer interest with the Apple QuickTake 100, but the lack of Kodak branding – on an entirely Kodak-developed camera – meant it just didn’t make a mark for the company.

By then, it was too late. The consumer sector had been eaten away by rival camera manufacturers able to build more affordable digital cameras. Digital cameras had begun to appear in mobile phones and, when Apple launched the iPhone in 2007, smartphones quickly began to close the gap on dedicated digital cameras.

Having essentially cannibalised its own successes in professional photography, Kodak’s reservations about the technology it had created saw it lose out in the consumer sector too. Sasson and co were simply too far ahead of the curve for Kodak’s higher-ups; by the time the old guard had given way to fresh-faced innovators, it was too late. Kodak filed for bankruptcy in January 2012.

However, what it left in its wake is the biggest breakthrough in imaging since Joseph Nicéphore Niépce took the world’s first photograph in 1826. It lowered the barrier to entry for photography: now practically anyone with a mobile phone made in the last ten years can snap a photo in seconds, and share it with the world in a couple more.


3. The blockchain


In 2013, Bitcoin was the word on everybody’s lips. This magical online currency sprang out of nowhere and into the public eye, exploding into a fiercely traded digital market. A year on, the fervour had only just begun to settle. But, thanks to heavy investment, the world would never be the same again.

It’s safe to say that, while not ubiquitous, Bitcoin and other cryptocurrencies have gained a foothold in modern society. While most people still have no idea exactly what a bitcoin is, or whether it needs a capital letter or not (Bitcoin is the network, bitcoin is the currency), the general public is aware of its existence. A quick look at Coinmap also shows its creeping acceptance as a form of payment at hundreds of UK businesses.

Don’t go converting your sterling into Dogecoin just yet: cryptocurrencies still aren’t a threat to conventional currencies. However, the technology that underpins them all could have a huge impact on our everyday lives. If Bitcoin is the outward face of cryptocurrencies, then the blockchain is their beating heart.

Without a doubt, the blockchain is the biggest internet-based breakthrough in recent years.

When used in cryptocurrencies, blockchain technology creates a practically tamper-proof record of every transaction in a decentralised log. It’s mind-achingly complex when you get down to the nitty-gritty of how it all works, but thankfully we have a simple guide to what the blockchain is.
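
At its core, though, the trick is straightforward: every block stores a cryptographic hash of the block before it, so rewriting any earlier entry breaks every link that follows. The sketch below shows that chaining in a few lines of Python – a bare-bones illustration only, leaving out the proof-of-work, signatures and peer-to-peer consensus that real cryptocurrencies depend on.

```python
import hashlib
import json

# Minimal hash-chain sketch: each block commits to the previous block's hash,
# so tampering with history is immediately detectable.

def block_hash(transactions, previous_hash):
    payload = json.dumps({"tx": transactions, "prev": previous_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, previous_hash):
    return {"tx": transactions,
            "prev": previous_hash,
            "hash": block_hash(transactions, previous_hash)}

def chain_is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["tx"], block["prev"]):
            return False                      # block contents don't match its hash
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # block doesn't link to its predecessor
    return True

chain = [make_block(["Alice pays Bob 5"], previous_hash="0" * 64)]
chain.append(make_block(["Bob pays Carol 2"], previous_hash=chain[-1]["hash"]))

print(chain_is_valid(chain))                  # True
chain[0]["tx"] = ["Alice pays Bob 500"]       # try to rewrite history...
print(chain_is_valid(chain))                  # False: the tampering is caught
```

In a real cryptocurrency, copies of the chain are held right across the network, so an attacker would have to rewrite history on most of it at once – which is what makes the log so hard to tamper with.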


The blockchain’s current use isn’t terribly exciting – certainly not the sort of thing to get many people falling over themselves to make use of it. The potential of blockchain technology, however, is nerve-tinglingly exciting.

It’s something completely new, unseen before in technological history. It’s a complex, practically tamper-proof system: a digitised community logbook in which nobody holds the keys, responsibility is shared and everything is validated by those who support and add to the chain.

The technology could be used to send and receive data securely, with each computer verified automatically by a unique ID that can’t be copied or forged. Official documents could be validated digitally, emails could be sent securely and websites would never need passwords to log you in.

A browser-based blockchain could be used to verify the legitimacy of websites, effectively replacing the cumbersome standard of TLS/SSL certificates to create super-secure browsing environments. It’d mean the end of debacles like Superfish or Privdog, and reduce the possibility of database hacks taking services offline.

The only real potential spanner in the works is finding a reason to get people to use the technology. Because blockchains are decentralised, they can only be sustained and verified if people use them and support them.

Bitcoin creator Satoshi Nakamoto was smart enough to hang the temptation of financial gain off his groundbreaking creation to spur people to use it. For other applications that could be trickier, but if they succeed, the potential uses of blockchain technology are all but limitless.


4. Graphene


If Bitcoin is magic internet money, then graphene is just witchcraft.

Formed of a single layer of carbon atoms in a honeycomb lattice, graphene is around 100 times stronger than steel of the same thickness, an incredible conductor of both heat and electricity and, because it’s just one atom thick, practically transparent. Essentially, graphene is the super-material science-fiction writers have long imagined for the far-off future.

However, until two scientists from the University of Manchester managed to extract the first single-atom-thick crystallites from bulk graphite in 2004, useable graphene was little more than a work of science-fiction.

In 2013 the £61 million National Graphene Institute was founded at the University of Manchester to continue research into the commercial benefits of graphene manufacture. This was complemented by the establishment of the £60 million Graphene Engineering Innovation Centre (GEIC) in 2014. With the UK government alone investing nearly £40 million in the future of graphene, it’s clear its potential applications have researchers and manufacturers in a frenzy.


Graphene products could last far longer and be easier to recycle – everything being made from carbon – and the world of plastics could, rather rapidly, give way to a world of graphene.

Processors built from graphene would mean smaller computers and better wearable devices. Graphene batteries could be made so thin and light that they could be woven into fabrics, hold more charge than anything currently available and still be safe to wash. And graphene could even be used to create strong, lightweight and flexible screens for ultra-thin, tough portable devices.

Used in planes or cars, it could make for very lightweight and extremely energy-efficient vehicles: a lighter plane could fly further on the same amount of fuel, or be built strong enough to survive most crashes. Graphene is so versatile it could even end up in something as simple as paint capable of weathering the harshest storms, or be added to lubricants and oils to give them extra strength.

A Minority Report-style future could really be possible, although we hope both Precogs and Tom Cruise get left by the wayside.

But why, if manufacturers and developers are so interested and prepared for a world of graphene products, haven’t we seen a single graphene device come to market?

Unfortunately the only downside to such an otherworldly material is its otherworldly cost.

Currently one of the most expensive materials on Earth, in 2008 the price of manufacturing a piece of graphene no larger than the cross-section of a human hair was well over $1,000. A single square centimetre would have set you back around $100 million, meaning that even Apple, with a cash pile of well over $150 billion, could have afforded less than a fifth of a square metre of the stuff before draining its reserves dry.
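
For a sense of scale, here’s the back-of-the-envelope sum, assuming a cash pile of roughly $180 billion – a ballpark figure for illustration, not an exact balance sheet:

```python
# Rough 2008-era graphene affordability, using assumed round numbers.
price_per_cm2 = 100_000_000       # ~$100 million per square centimetre
cash_reserves = 180_000_000_000   # ~$180 billion, assumed for illustration

affordable_cm2 = cash_reserves / price_per_cm2
print(affordable_cm2)             # 1800.0 square centimetres
print(affordable_cm2 / 10_000)    # 0.18 square metres
```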

Thankfully prices have dropped since 2008 and, while exfoliated graphene still remains expensive, epitaxial graphene has dropped to around $100 per square centimetre.

Britain is hoping to be the first major manufacturer of graphene in the world, hence the hefty investments. Already two commercial manufacturers exist in the North East of England to help realise a world of graphene technologies.

While nobody has quite worked out how to put it to everyday use just yet, graphene is such an unbelievably versatile discovery that we simply couldn’t leave it off our list.


5. The Touchscreen


To many, the touchscreen didn’t become notable until Apple rolled out its first iPhone in 2007. This was the future: a mobile device packing power and features far beyond its contemporaries, with a touchscreen interface to boot.

However, the capacitive touchscreen was in fact created at CERN as a way of streamlining its computer console interfaces, replacing thousands of buttons, knobs and switches. Touchscreens may seem like a modern way to interact with a computer, but the technology is over 40 years old.

Though the idea was first described in an article published in 1968, the first working touchscreen came about in the early 1970s thanks to CERN engineers Frank Beck and Bent Stumpe. Their prototype, created by evaporating a very thin layer of copper onto a flexible, transparent Mylar sheet, was able to register nine button inputs.

After showing their findings, they were granted permission to further improve and implement the capacitive touchscreen technology. By the end of the project, CERN’s SPS Control Room had three touchscreen-enabled consoles installed, each capable of registering 16 button inputs.
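
Conceptually, reading a fixed-button capacitive screen like CERN’s is simple: scan each pad, compare its reading with an idle baseline, and report any pad whose capacitance has shifted by more than a threshold. The sketch below shows that scanning loop; the 4 x 4 layout echoes the 16-button consoles, but the readings, baseline and threshold are invented for illustration.

```python
# Toy scan of a fixed-button capacitive panel: pads whose readings deviate
# from the idle baseline by more than a threshold count as touched.

BASELINE = 100    # assumed idle reading for every pad
THRESHOLD = 30    # assumed deviation needed to register a touch

def touched_pads(readings):
    """Return (row, column) for every pad whose reading indicates a touch."""
    touches = []
    for row_index, row in enumerate(readings):
        for col_index, value in enumerate(row):
            if abs(value - BASELINE) > THRESHOLD:
                touches.append((row_index, col_index))
    return touches

# One scan of a 4 x 4 pad matrix (16 "buttons", like the SPS consoles).
scan = [[101,  99, 100, 102],
        [100, 180, 100,  98],   # a firm touch at row 1, column 1
        [ 99, 100, 101, 100],
        [100, 100,  99, 150]]   # a lighter touch at row 3, column 3

print(touched_pads(scan))       # [(1, 1), (3, 3)]
```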

Amusingly, Stumpe points out that the technology behind his early touchscreens outlasts that of modern-day devices: “I have read on Wikipedia that the natural lifetime of the current touch screens is about two years. The ones we developed remained in operation for more than 20 years!” He’s not wrong either; the touchscreens Stumpe developed were in use until CERN redesigned the control room in 2008 for the LHC.


But why, if touchscreens have been in use since 1973, have they only become widespread in consumer products in the last ten years? The technology behind capacitive touchscreens demands a lot of processing and, put simply, it’s only recently that consumer devices have been able to handle it.

We no longer have 16-button grids, but screens that can track multiple fingers at once. And while it was once big science driving the technology forward, it’s now mobile phones pushing innovation in the sector.

Touchscreens are now commonplace, so much so that a new generation of users knows only a world in which they exist. By 2012 the global touchscreen market was worth close to $13.8 billion, and it has only continued to climb as the smartphone industry grows and touchscreens find their way into more consumer laptops.

By 2014 the global smartphone market was worth over $150 billion, with Apple selling a staggering 74 million iPhones in its last quarter. It’s thanks to a humble Danish engineer doing his job in the ‘70s that Apple came so close to earning more than the GDP of his home country 40 years later.
