You are probably familiar with Elon Musk’s plan to build a high-speed transportation link between Los Angeles and San Francisco. First discussed in 2012, the Hyperloop was described by Musk as a fifth mode of transportation that would be a cross between a Concorde, a rail gun and an air hockey table. Traveling at an average speed of 600 mph, passengers on the California Hyperloop would make the trip between LA and the Bay Area in 35 minutes. I think we need a hyperloop for cyberspace too.
You might ask: isn’t the Internet fast enough? For most end users, it is. If you can stream House of Cards in 1080p, it might seem just fine. But if you are trying to transfer a 20 GB video file to another user or another company, the Internet can be painfully slow. It can take hours, or sometimes days, to move a multi-gigabyte file across the multiple hops of the public Internet.
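Some back-of-the-envelope arithmetic shows why. The link speeds below are illustrative assumptions, not figures from any particular provider:

```python
# Rough transfer times for a 20 GB file at various effective speeds.
# The scenarios are illustrative assumptions, not measurements.
FILE_SIZE_GB = 20
FILE_SIZE_BITS = FILE_SIZE_GB * 8 * 10**9  # using 1 GB = 10^9 bytes

scenarios = {
    "10 Mbit/s uplink (common home broadband)": 10,
    "100 Mbit/s office connection": 100,
    "1 Gbit/s link at 50% effective utilization": 500,
}

for label, mbps in scenarios.items():
    seconds = FILE_SIZE_BITS / (mbps * 10**6)
    print(f"{label}: {seconds / 3600:.2f} hours")
```

On a typical 10 Mbit/s uplink, that single 20 GB file ties up the connection for about four and a half hours, and that is before any congestion, retransmissions or protocol overhead.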
Broadband providers and content delivery networks (think Akamai) have made tremendous strides in “one-to-many” scenarios, in which one company shares the same content with hundreds, thousands or millions of users. But many large files are shared only once, between two individuals or two companies.
Who exchanges very large files? Who doesn’t is probably a better question.
- Movie studios shoot footage in raw formats that must be edited and reformatted before the final product can be distributed to movie theaters. Raw footage shot at IMAX or 1080p resolution can produce very large files.
- Health care professionals need to share medical images (MRIs, X-rays) with specialists at other offices or hospitals. These high-resolution images are often multi-gigabyte files.
- Pharmaceutical companies need to submit massive amounts of documentation and clinical trial data to regulatory agencies for approval to sell new drugs. Sending these files over the plain old Internet can take days in some cases.
- Marketers need to exchange print advertisement files, video content and other artwork with agencies and freelancers. An individual graphic might be just a few megabytes, but collections of artwork and video can reach terabytes in size.
- Telecom carriers exchange Call Detail Records listing the trillions of phone calls made from landline and mobile phones every day. Imagine the size of a file with a line item for every call made or received by a company with 50,000 employees.
As we move from a gigabyte-and-terabyte world into an exabyte-and-zettabyte world, the problem of exchanging large files will only get worse. The Internet of Things will generate massive new amounts of data to be exchanged. Sensor readings from industrial equipment will record temperature, pressure, humidity, acoustics and other environmental variables; these data files will be batched up and transmitted from end users to manufacturers and service providers. Image and video resolution will expand from 4K and 5K to 8K and 10K. Holograms, augmented reality and 3D images will become mainstream formats.
There are a number of options for exchanging large files today. None are ideal.
FTP is a popular option, but it performs poorly when sending massive files over the congested public Internet. FTP moves data over a single TCP connection, and the throughput of a single TCP stream degrades sharply as latency and packet loss increase.
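The single-stream limitation can be quantified with the well-known Mathis approximation for steady-state TCP throughput. The RTT and loss figures below are illustrative assumptions for a long cross-country path, not measurements:

```python
import math

# Mathis approximation for steady-state TCP throughput:
#   throughput ~ (MSS / RTT) * (C / sqrt(loss_rate)),  C ~ 1.22
# Input values are illustrative assumptions, not measurements.
MSS_BYTES = 1460  # typical Ethernet maximum segment size
C = 1.22

def tcp_throughput_mbps(rtt_s, loss_rate):
    bytes_per_s = (MSS_BYTES / rtt_s) * (C / math.sqrt(loss_rate))
    return bytes_per_s * 8 / 10**6

# A cross-country path: 80 ms RTT, 0.1% packet loss
print(f"{tcp_throughput_mbps(0.080, 0.001):.1f} Mbit/s")
```

Under these assumptions a single TCP stream tops out around 5–6 Mbit/s, no matter how fat the underlying pipe is. That is why a single-connection FTP transfer can crawl even on a gigabit link.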
File sync-and-share platforms such as Dropbox and Box.net can work for small volumes, but they are not designed to support the frequent exchange of massive files.
Many companies upload files to a secure website from which the recipient can download them. This approach can work, but it does not scale to a large number of files: users have to keep checking the portal for updates and then download each file one by one.
Others use old-fashioned techniques such as copying files onto DVDs and shipping them in the mail. For example, Amazon Web Services users who want to upload massive data sets can ship a hard drive via UPS, FedEx or DHL to the appropriate data center.
Specialized “file acceleration” technologies from companies such as Signiant and Aspera are the best approach on the market.
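One common trick behind such acceleration tools is parallelism: splitting a large file into byte ranges and moving the pieces over several streams at once, so no single connection’s throughput ceiling limits the whole transfer. Here is a minimal, hypothetical sketch of that chunking logic; it is not any vendor’s actual protocol:

```python
def chunk_ranges(file_size, num_streams):
    """Split [0, file_size) into contiguous byte ranges,
    one per parallel transfer stream."""
    base, extra = divmod(file_size, num_streams)
    ranges, start = [], 0
    for i in range(num_streams):
        size = base + (1 if i < extra else 0)  # spread the remainder
        ranges.append((start, start + size))
        start += size
    return ranges

# A 20 GB file split across 8 parallel streams:
for start, end in chunk_ranges(20 * 10**9, 8):
    print(start, end)
```

In a real product each range would be fetched on its own connection (or over a UDP-based transport with its own retransmission logic) and reassembled on disk at the receiver.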
What if we combined these file acceleration protocols with a dedicated physical network designed specifically for large file transfers? This network, or digital hyperloop, would run through some of the largest Internet data centers, which have a concentration of direct connections to end users. As adoption grew, a network effect would emerge: more and more companies with high volumes of large file transfers would co-locate their servers and storage equipment in data centers that sit on the digital hyperloop. Being in these data centers would let senders and receivers plug directly into a port on a hyperloop router. File transfers that normally take hours or days could be accomplished in minutes (or seconds).
What do you think? Should we build a digital hyperloop?