Yesterday, The Wall Street Journal published an informative article, *Forget ‘the Cloud’; ‘the Fog’ Is Tech’s Future*, which argues that cloud computing is constrained by available bandwidth over 3G/4G and in the home, and that (unfortunately) the United States ranks 35th in the world in bandwidth per user.
When we incorporated the company four years ago (actually four years, one month, and eleven days ago), Weihan and I left our PhD and master’s studies to pursue an idea: we believed that although cloud computing was becoming quite popular, resource constraints like bandwidth would lead businesses and individuals to seek a more distributed approach to sharing data (particularly large data).
During our earliest investor pitches, one question we came up against again and again was *“How did you come up with your company name?”*, which generally prompted me to launch into a short story:
We believe that today’s idea of cloud computing will evolve. Today, we view the cloud as a faraway, opaque entity. We send data into the cloud and forget about it. However, as the amount of data we create grows exponentially, so will the bandwidth we need to share it. Because of these requirements, we’re building a company that looks to a future where, instead of using the faraway cloud, we will be using the underutilized resources of devices in the “air around us” — so we called the company Air Computing, Inc.
So of course Weihan and I were quite flattered to read the WSJ article and see that Cisco’s marketers had reached a similar conclusion, calling it fog computing:
Whereas the cloud is “up there” in the sky somewhere, distant and remote and deliberately abstracted, the “fog” is close to the ground, right where things are getting done.
Four years ago, we had an uphill battle explaining the need to businesses; today, that is no longer the case.
As businesses create more and more data internally across a variety of devices, they realize that while their external bandwidth is usually heavily utilized, their internal bandwidth is greatly underutilized. In networked offices and homes, the available bandwidth on the LAN usually far exceeds the available external bandwidth.
For example, although few of us have 100 Mbit/s or 1 Gbit/s Internet connectivity, most of us have 100 Mbit or gigabit routers and switches in our offices and homes, and even our Wi-Fi connections are often faster than the bandwidth of the ISP link they ultimately connect to.
This means that if you need to share data with someone a few floors above you, a few desks away from you, or in the office down the street, using the public cloud is usually the wrong solution.
This also means that while four years ago few organizations cared about our distributed device-to-device architecture, today we win a lot of business specifically because we can sync data at speeds limited only by their switches and routers (e.g. 60–80 MB/s), rather than by their saturated Internet connectivity.
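A quick back-of-the-envelope calculation makes the difference concrete. The file size and transfer rates below are illustrative assumptions for a typical office, not AeroFS benchmarks:

```python
# Rough comparison: syncing a large file peer-to-peer over the LAN
# versus round-tripping it through the public cloud over a shared uplink.
# All figures are illustrative assumptions, not measured benchmarks.

def transfer_time_seconds(size_gb: float, rate_mb_per_s: float) -> float:
    """Time to move size_gb gigabytes at rate_mb_per_s megabytes per second."""
    return (size_gb * 1024) / rate_mb_per_s

FILE_GB = 10    # e.g. a large design file or VM image (assumed)
LAN_RATE = 70   # MB/s, mid-range real-world throughput on a gigabit LAN (assumed)
WAN_RATE = 2    # MB/s, a saturated office uplink shared by many users (assumed)

lan_minutes = transfer_time_seconds(FILE_GB, LAN_RATE) / 60
wan_minutes = transfer_time_seconds(FILE_GB, WAN_RATE) / 60

print(f"LAN sync:   {lan_minutes:.1f} minutes")   # ~2.4 minutes
print(f"Cloud sync: {wan_minutes:.1f} minutes")   # ~85 minutes (upload alone)
```

Under these assumptions the LAN path is roughly 35× faster, and the cloud path also pays the upload cost again as a download on the recipient’s side.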
Decades ago we went through a trend of ‘thin’ computing not unlike today’s trend of cloud computing. Instead of the cloud, we used mainframes that hosted all of our data. Then, as computing requirements expanded and latency grew, the pendulum swung and we moved to doing more computing on the devices themselves. Today, we are living through a similar swing back in the other direction. And although it seems like the idea of ‘centralizing’ in the cloud is winning, we expect the truth lies somewhere in between, because at the end of the day, we’re all users of the largest distributed/decentralized environment in the world: the Internet.
Ultimately, it does not matter whether this emerging trend ends up being called private cloud computing, hybrid cloud computing, fog computing, or air computing. What excites us is that more and more infrastructure-oriented businesses like Cisco, HP, and IBM are joining us on a journey where, four years ago, we were alone.
— Yuri & the AeroFS team