The Next Web is Here: Using Computer Vision to Measure the Physical World
Back in the mid-1990s, when the Internet went mainstream, I was director of exhibits at the Computer History Museum in Boston, Massachusetts (now located in Mountain View, California). Everything we take for granted today about the web was cutting-edge back then, and every day, it seemed, business people would ask me basic questions like “What’s the Internet?” or “Why does my company need a website?”
We’ve come a long way since then, but while the Internet and mobile breakthroughs have focused on making sense of the digital world—tracking things like click-through rates—the physical world has remained (with all due respect) as dumb as a doorknob.
There’s a vast horizon of untouched possibilities in the physical environment and the nuances of how people interact with it. And we’ve finally reached the point as an industry, on the hardware and software development fronts as well as in data analytics, where we can open up a new realm of business intelligence, all made possible through computer vision.
Shasta Ventures partner Isaac Roth calls this next opportunity Vision AI, and in a recent VentureBeat article he does a great job sharing his belief in the potential of computer vision to measure things that were previously too vast (every dead oak tree in California), too expensive (every yeast cell in a culture), or too subtle (changes in a person’s gait that suggest a medical condition).
Computer vision cameras have been available for years, but analyzing the video has always required a connection to the cloud, and issues with cost and bandwidth consumption made that impossible at scale. Streaming every frame would saturate a company’s network in no time, and at cloud pricing (Amazon Rekognition, for example, charges about $1 per 1,000 inferred images), continuous analysis is so costly that no company could provide a profitable service on top of it.
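A rough back-of-the-envelope calculation shows why. The frame rate and camera count below are illustrative assumptions, not figures from any vendor; only the $1-per-1,000-images rate comes from Rekognition’s published image-analysis pricing:

```python
# Illustrative cloud-inference cost model: assumes every captured frame
# is sent to a cloud API billed at $1.00 per 1,000 analyzed images.
COST_PER_IMAGE = 1.00 / 1000  # dollars

def monthly_cloud_cost(cameras: int, fps: float,
                       hours_per_day: float = 24, days: int = 30) -> float:
    """Estimated monthly bill for sending every frame to the cloud."""
    frames = cameras * fps * hours_per_day * 3600 * days
    return frames * COST_PER_IMAGE

# A single camera sampled at just 1 frame per second, around the clock:
print(f"${monthly_cloud_cost(cameras=1, fps=1):,.2f}")  # → $2,592.00
```

Even at one frame per second, a single always-on camera runs into thousands of dollars a month, before any bandwidth costs.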
With the availability of edge computing, visual data can now be analyzed on the device itself, eliminating the bandwidth and cost problems. And with advances in machine learning and the availability of high-quality pre-trained models, you can fine-tune algorithms to capture precisely the data you need about the physical world.
The possibilities for how computer vision could give you a fresh look (pardon the pun) at saving money and increasing revenue are practically limitless. One owner of several fast-food franchises even installed cameras in his trash dumpsters to encourage better employee recycling practices and cut down on waste-hauling costs.
With precise data about your physical space and how people use it, you can identify significant areas for improvement of your business, such as aligning staff levels to more closely match store traffic levels, modifying store layout to improve customer engagement, or adjusting HVAC levels based on real-time building occupancy rates.
At its most basic level, computer vision is about replacing the human view with computers, and it will be as disruptive and pervasive as the web and mobile. Just as the arrival of the web had people asking how their websites were performing, computer vision can finally give people real-time insight into how their physical spaces are performing. Think Google Analytics, but for the real world.
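To make the analogy concrete: a “pageview” for a physical space might be a visit, and “time on page” a dwell time. Here is a minimal sketch that computes both from entry/exit timestamps; the event format is a made-up illustration of what an edge camera might emit, not a real API:

```python
from datetime import datetime

# Hypothetical (entry, exit) timestamp pairs from a door-facing edge camera.
visits = [
    (datetime(2019, 5, 1, 9, 0), datetime(2019, 5, 1, 9, 12)),
    (datetime(2019, 5, 1, 9, 5), datetime(2019, 5, 1, 9, 35)),
    (datetime(2019, 5, 1, 10, 2), datetime(2019, 5, 1, 10, 8)),
]

total_visits = len(visits)  # the physical-world "pageviews"
avg_dwell_min = sum((out - in_).total_seconds()
                    for in_, out in visits) / total_visits / 60

print(total_visits, round(avg_dwell_min, 1))  # → 3 16.0
```

From there, the familiar analytics questions (peak hours, bounce-like short visits, week-over-week trends) are just aggregations over these events.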
In my next post I’ll talk about the obstacles preventing widespread adoption of computer vision, and what we’re doing to remove them.