This is the fourth and final post on how the Hotel California Scenario for future social, local and mobile media, apps, platforms, devices, and solutions (SoLoMo) is being created. In the Hotel California scenario, the User is the Interface, The World is the Computer, and the Situation is the Network. This post’s short list of companies exemplifies the offerings that will transform our interactivity with the world around us and disrupt just about every vertical market in the process.
When I started this four-part blog series last year, I referred to the user’s point of view when contending that 2013 will be a pivotal year for a new degree of Social, Local, and Mobile (SoLoMo) solutions to be embedded in our daily personal and professional lives. Others have called this future transformation by a number of titles:
- M2M – Machine to Machine
- M2M2M – Machine to Machine to Man
- The Internet of Things
- Smart Services
- The Contextual Web
- The Sentient World
- The Ambient Web
Referring to the Hotel California use case helps to avoid getting caught up in splitting semantic and technical hairs. Whatever moniker you prefer, 2013 will be the year businesses in just about every vertical market begin to be disrupted by a new form of SoLoMo, changing the way we do a lot in our daily lives.
SoLoMo Practical Use Case Examples
How will it all work? New sensors and devices in the world around you are about to identify entities, record events, send the corresponding data through any number of wireless networks (depending on the situation) to an application that will either generate another event or produce meaningful information sent to user(s) based on pre-learned and/or pre-set preferences. Here are some examples:
- Your favorite ladies’ apparel store app recognizes you approaching via a number of possible methods and sends your smartphone a route through the store that you could follow to see this year’s new spring fabrics and patterns matching your online social browsing, likes and wants. Expect coupons, credits and gamification to intensify and influence your shopping experience.
- Your glucose levels are monitored in near real time by a device adhered to your stomach, and the results are sent to your smartphone and then to your doctor.
- Your client unexpectedly arrives in town with nothing to do tonight, and an app on your smartphone pulls up reservation availability at a certain seafood restaurant and ticket availability for the ballgame, given his preferences.
- A parking app knows the class you go to every Monday night and routes you to an open parking spot via mobile as you near your destination.
- The thermostat in your house rises to a comfortable 70 degrees from an energy saving 55 when your car gets within 2 miles of home.
- A smart container might message that the last gallon of milk is about to expire or be depleted, and that information could either update your shopping list or be sent directly to your grocer for fulfillment based on a pre-set contract. Smart containers might be your fridge or a product itself.
- Your insurer messages you that your bathroom scale or the fitness monitor in your shoe or bike verifies that you qualify for a health insurance discount.
- An airplane mechanic uses Google Glasses to pull up a schematic of the engine he is working on with an app that recognizes the image, and augmented reality allows him to find parts in house, order needed parts, view critical path for estimated repair time, calculate and send a time and materials proposal/bill, and show him a short video of the repair process.
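The sense, transmit, and act loop running through all of these examples can be sketched as a simple rule engine: sensors emit events, and pre-set preferences decide what happens next. Everything in this sketch is illustrative — the entity names, thresholds, and actions are invented, not taken from any real product.

```python
from dataclasses import dataclass
from typing import Callable

# A sensor reading: which entity was observed, what was measured, and the value.
@dataclass
class Event:
    entity: str      # e.g. "car", "glucose_monitor"
    metric: str      # e.g. "distance_to_home_mi", "mg_dl"
    value: float

# A rule pairs a condition on incoming events with an action to fire.
@dataclass
class Rule:
    matches: Callable[[Event], bool]
    action: Callable[[Event], str]

rules = [
    # Thermostat example from above: warm the house when the car is within 2 miles.
    Rule(lambda e: e.entity == "car" and e.metric == "distance_to_home_mi" and e.value <= 2.0,
         lambda e: "thermostat: set 70F"),
    # Health example from above: alert the doctor when glucose drifts out of range.
    Rule(lambda e: e.entity == "glucose_monitor" and e.metric == "mg_dl" and e.value > 180,
         lambda e: f"notify doctor: glucose {e.value:.0f} mg/dL"),
]

def dispatch(event: Event) -> list[str]:
    """Run an incoming event through every rule and collect triggered actions."""
    return [r.action(event) for r in rules if r.matches(event)]

print(dispatch(Event("car", "distance_to_home_mi", 1.4)))  # ['thermostat: set 70F']
```

Real systems add learning, priorities, and delivery channels, but the core pattern — match event, fire action — is the same.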
We are now moving from the experimental stage to the commercialization stage of these and many other examples. The reader should not think of these as just consumer apps, either. In each case, there are implications for the way product/service providers, governments and other organizations operate and/or market. There will be a wave of pure-play B2B opportunities as well. As incredible as it sounds today, the real growth in hardware and related services will not be focused solely on smartphones and tablets, as there will be billions of new “devices” in the environment that will need to be designed, built, sold, connected/paired and managed. By way of example, below I have listed a few companies that will enable some of these changes. Think about what they are doing, how they plan to do it, and imagine how the integrated elements will create a new future.
Previous posts in this four-part series have focused on mining the preference databases being built by social media. My intent in the list below is to highlight offerings that will make the physical world around us “smarter” by enabling the networked identification and/or finding of people, things and their respective descriptions and locations at any point in time so that an action or useful data is recorded. GPS and A-GPS do not produce the consistent degree of accuracy needed to pinpoint a user’s location as they enter a particular store. (You have probably noticed the blue dot on your mobile map shifting suddenly from one place to another.) No list could start without the two obvious tech titans leading the market.
Disclaimer: In keeping with the stated intent of this Blog, many of the strategies and goals deliberated are based on the author’s Points of View and NOT on disclosure of any corporate strategies, goals, or other information such as technology partnerships or go to market partnering. They should be considered OPINIONS.
Hotel California Scenario Enablers:
At the top of my list, Google has been preparing their long game to win the SoLoMo opportunity for some time. To reiterate, I have no inside insight and can only speculate on Google’s strategy given their public moves. In my opinion, there are two new foundational elements to their SoLoMo strategy.
The key element is their Google Glass initiative. This week, for the first time, outside developers are invited to code for Glass. Apparently the emergence of a Glasses competitor, the Vuzix M100 (also Android-based), got Google to launch. I like to say that Google Glass is appropriately named because you can look through glass from both sides, but it is more than that. Yes, we’ve all seen the videos, and Glass promises to enable you to learn and do incredible new things as you look out from the glasses. I have previously noted the emergence of facial recognition. It won’t be long until you realize that not wearing them makes you relatively ignorant and puts you at a disadvantage. Second, most people agree that Glass will allow Google to learn even more about you and add to your profile, habits and preferences as you live your life. That’s expected, right?
However, there is a possible third point I have not seen discussed in any media. I believe Google will also leverage Google Glass wearers as a point of reference, or vantage point, from which to scan the environment around us all and catalogue the world with its camera in an up-to-date, searchable, marketable manner. So, rather than assume the time and expense of populating cameras on every street corner in the world, Google can theoretically have consumers pay (by buying the Glasses) to turn us all into photojournalistic honeybees travelling from point to point, picking up the data nectar that Google can then sell back to us through offerings and advertising.
Think of the opportunity of having billions of users walking, running, skating, biking, skiing and driving around like drones, capturing information that Google can use to create revenue. I can be in Queens and, depending on terms of service, see what you last saw on 5th Avenue or beyond – hey, there’s a sale at Macy’s. Moreover, if Glasses are truly “always on” (your head), then given the algorithm-happy legacy of Googlers, it stands to reason that Google will find ways to turn enhanced indoor navigation capabilities into new marketing opportunities.
As one example, I can virtually walk around that Macy’s using a recent user-generated video, jumping from department to department. As my gaze is tracked, the video shifts to that angle, focuses on that merchandise, and offers relevant content via AR. It bears watching how Google Glass will authenticate payment transactions, perhaps by a combination of voiceprint and password, or by selecting a predefined image. There are practical hurdles to re-using appropriate videos, such as video curation, image stabilization, and splicing. Once the cameras are populated, however, these become secondary, smaller problems to overcome. Imagine the benefit to firefighters and law enforcement, who could view video of each location as they travel to the scene.
Data delivery to the user may still happen through your smartphone, but it is the data capture capabilities that make Google Glasses paramount in the delivery of a connected world. Yes, you can look through glass from both sides, but you can also have an extra pair of eyes looking out – Google’s eyes. Ironically, the term “google-eyed” existed long before the company’s googol mathematical reference became its brand. (You can’t make this stuff up.) As a marketer, if I were Google, I would plan a way to control this term in a positive way before others potentially exploit it negatively. According to the USPTO website, last month they received a trademark on “Google Mirror”, which might possibly be how they place a friendly patina on marketing this capability. As mentioned earlier, this is all speculation on my part.
This brings up the second, complementary element to Google Glass: their recently granted patent for identifying objects in videos. It can distinguish a man from a pole from a car from a motorcycle. Combined with its recently acquired Ukrainian firm Viewdle, Google now possesses a trifecta of object recognition, facial recognition and augmented reality. This should complete what would be necessary to identify – and make searchable and describable – everything and everyone, anywhere at any time, in a user-friendly way, kept up to date.
In a future post, I will offer ideas as to how marketers can use the treasure trove of information that Glasses wearers will discover and be “sharing” about the people, objects, and places around them. Here’s an example. Imagine Lowe’s offers you a number of entirely new living rooms to choose from by using videos of your living room streamed from a Glasses-wearing houseguest. They replace the room’s actual elements with proposed new wall colors, pictures, drapes, furniture, carpeting and lighting based on your income, location, and preferences. Seeing how an entire room in your house can be transformed before your eyes has much more impact than imagining a different wall color or couch. It takes the interior designer out of your home and places them in the app. The supply chain impact of this new form of marketing extends back from the free app to Lowe’s, to the contractors and trucking, to manufacturing and raw materials. Any office building, corporate apartment, hotel, resort, nursing home, etc., can be marketed to in this powerful new way.
If past is prologue, Google will open source certain information to app developers, and sell access to customers based on their searches, preferences and lengthy Terms of Service agreements that relatively few consumers read. If this is indeed their strategy, it is critical that Google dominate the Glasses market early on. All of this assumes no black swan event from practical use, such as:
- a clinical discovery that having messaging frequencies so close to your head for extended periods is detrimental to your health
- eye strain and headache issues
- a well-publicized rash of violent thefts of people’s Glasses
- overly distracted folks walking into traffic or crashing while driving
- software bugs and component/product issues
- an outcry from privacy advocates, or
- poor marketing (the most controllable piece)
BlackBerry 10 was announced two days ago. When I see BlackBerry pursue the smartphone market it has already lost by re-branding and incrementally improving competitive product performance, it reminds me of Wayne Gretzky’s key to success: skate to where the puck is going to be, not where it is now. The future is integrating the world around us, and smartphones will be just one element.
Apple competes with ecosystems, and its success may not be measured as much in absolute users as in the comparative amount of data those iPhone owners consume. Apple is extremely cautious about announcing upcoming products, so beyond their own facial recognition patents, a database of tagged photos, and some rumored fingerprint ID and NFC technology that are not too groundbreaking, it is impossible to piece together exactly what Apple’s strategy will be without inside information. Suffice it to say that the visionary folks at Apple should not be caught off guard without a strategy for the Hotel California SoLoMo future. Having said that, Google appears to have massive momentum in key areas, not the least of which is its proven relative mastery of Google Maps/Street View. I believe that if Apple does not launch a timely, competitive offering to Glasses, it will lose out tremendously.
As mentioned in an earlier post, Facebook bought an Israeli facial recognition company in 2012. It’s not too far-fetched to assume they have a strategy to become a major player in this space. A lot has been written about them, so I’ll defer.
Koozoo is a venture-backed startup that asks users all over the world to turn their old smartphones into stationary nodes on a shared live video streaming network. My guess is that its founders understand the opportunity afforded by having a platform of eyes everywhere (as related above), and the difficulty of populating the world with devices on their own. My take is that they will have to play up the cool factor of being part of this webcam community and also provide some sort of reward to make it worth people’s while. Time will tell what the breadth of Koozoo’s strategic vision is; TechCrunch reports that the founder promises to curate content (that’s a lot of curation!) and restrict the data to public places. Publicly, the intent of this company seems to stop there for now.
If the long-range plan is to use the video feeds for more than what they have stated, Koozoo is on the right track in using optical input and not attempting to populate devices at its own expense. However, using old smartphone technology and cameras for video object and facial recognition would be counter-intuitive. Moreover, I believe their model is still over-reliant on behavioral changes, and so would be a far inferior go-to-market strategy than the Google Glass model I predicted above. If they can pull it off, it might have value as a secondary option.
Most technology enthusiasts have already heard of Nest, a next-generation thermostat that makes your heating more efficient by learning your preferences and managing your room temperatures. Nest is branching into broader smart home management as well. Comcast Xfinity, ADT and many other providers are targeting this market.
With Tagstand’s free Task Launcher app, users can create and use NFC “tags” to automate tasks: swipe an NFC-enabled phone near a tag that you’ve placed in a physical location and, like magic, your phone either takes an action or its settings are reset per your preprogrammed wishes. With Task Launcher’s NFC tags, you no longer need to navigate menus or find the right chiclets to do things like activating a music app when you get into a car, enabling Bluetooth or WiFi when you get home, or silencing your ringer and turning on vibrate when you enter a conference room. TechCrunch reports that in the three-month period ending June 23, 2012, 1 million actions were executed. As of the date of this post, the iPhone had not adopted NFC, but many Android devices have. Wired reports that Samsung has a similar NFC tag offering of its own: Samsung TecTiles are sold in packs of 5 for $15, allowing you to send a tweet or a text message, or place a standard phone call, just by tapping the small tag. A drawback today is that relatively few smartphone models offer NFC, but indications are the chips will continue to proliferate, especially given their security benefits for mobile payment processing.
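The tag-to-action mechanics are easy to picture as a lookup from a tag’s ID to a stored settings profile. This is only a hedged sketch of the idea — the tag IDs and profile fields below are hypothetical and do not reflect Tagstand’s actual data formats.

```python
# Hypothetical mapping from scanned NFC tag IDs to phone-setting profiles.
TAG_PROFILES = {
    "04:A2:19:B0": {"bluetooth": True,  "wifi": False, "ringer": "loud"},    # car dashboard
    "04:7F:3C:11": {"bluetooth": True,  "wifi": True,  "ringer": "loud"},    # front door
    "04:E0:55:2D": {"bluetooth": False, "wifi": True,  "ringer": "vibrate"}, # conference room
}

def on_tag_scanned(tag_id: str, phone: dict) -> dict:
    """Apply the profile stored for this tag; unknown tags leave settings untouched."""
    profile = TAG_PROFILES.get(tag_id)
    if profile:
        phone.update(profile)
    return phone

# Walking into the conference room: one tap silences the phone and enables WiFi.
phone = {"bluetooth": False, "wifi": False, "ringer": "loud"}
print(on_tag_scanned("04:E0:55:2D", phone))
```

The real work on a phone is wiring this lookup into the OS-level NFC dispatch and settings APIs, but the decision logic is just this table.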
StickNFind Bluetooth stickers can be adhered to things you might lose, like your car keys or wallet, to recover them if they are misplaced. Using a tracker app available for both iOS and Android smartphones, your lost assets can be found within approximately a 100-foot range. You can also set up a short-range geo-fence around your smartphone, with an alert tone if the asset is removed from that zone. The app presents a cool radar screen that allows the user to home in on a sticker, and can be paired with up to 20 stickers. When selected, the stickers also flash and buzz so they can be found. The stickers are the size of a quarter (US $0.25 piece) and lightweight. A drawback today is that many people disable Bluetooth in order to save on battery life (I do).
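Bluetooth trackers like these typically infer distance from received signal strength (RSSI). A common approximation is the log-distance path-loss model; the calibration constants below (RSSI at one meter, environment exponent) are illustrative assumptions — they vary by device and environment, and I am not describing StickNFind’s actual firmware.

```python
def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Log-distance path-loss model: tx_power_dbm is the calibrated RSSI
    measured at 1 meter, and n is the environment exponent (2.0 = free space)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

def check_fence(rssi_dbm: float, fence_m: float = 30.0) -> str:
    """Crude geo-fence: alert when the estimated distance exceeds the fence radius."""
    d = estimate_distance_m(rssi_dbm)
    return "ALERT: item left the zone" if d > fence_m else f"in range (~{d:.1f} m)"

print(check_fence(-59.0))  # reading at the 1 m calibration point: in range (~1.0 m)
print(check_fence(-95.0))  # weak signal: ALERT: item left the zone
```

In practice RSSI is noisy, so real apps smooth readings over time before raising an alert; that noise is also why consumer trackers quote a rough "~100 foot" range rather than precise distances.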
The potential ubiquity of any consumer driven location based service (LBS), i.e., where the consumer has to tap NFC, swipe a card/device, or photograph a QR code to make their presence known, will always be in question. You are leaving it up to the whims of individual users whether to go through the motions to create the LBS by adopting new behaviors. I need to buy tags, wait for their delivery, program the tags to do different things, affix them in various places, then later continually remember what each tag does, locate it, pull out my phone and perform a tap or swipe. Some will find it exciting, and others could not be bothered. I do misplace my stuff around the house, however.
ByteLight enables a smartphone or tablet to be tracked using the device’s camera to detect light patterns that are unique to the lights it provides for a given venue. Each light is programmed to emit a frequency pattern that is invisible to the human eye but can be captured by the device. This is especially pertinent for indoor navigation of large areas, like malls. You can see where your related parties are on an indoor map through the app. As with other models, this clever idea enables findability and trackability. ByteLight also allows you to drop content in a specific physical location, so that only permitted users see that content, or an alert, when they are in that area. I can see this being attractive to museums, for example. Perhaps the most attractive part of the value proposition is that, unlike other forms of indoor navigation, there is no need to install any equipment or wiring – just change the light bulbs to ByteLight and use existing sockets. ByteLight provides an SDK to developers. An obvious drawback to this model: since a camera is needed to detect location, how reliable is it when a smartphone is in your pocket or handbag, or your tablet is in your backpack? Radio frequency would be a better solution there.
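ByteLight has not published its decoding scheme, but the basic idea — recovering a light’s identity from its imperceptible flicker — can be sketched as a frequency analysis of camera brightness samples. The sampling rate, modulation frequencies, and fixture table below are all hypothetical, chosen only so the math works out cleanly.

```python
import numpy as np

FS = 240.0  # hypothetical brightness sampling rate from the camera, in Hz
# Hypothetical venue table: modulation frequency (Hz) -> light fixture identity.
LIGHT_IDS = {70.0: "fixture-12 (housewares)", 90.0: "fixture-31 (entrance)"}

def identify_light(brightness):
    """Find the dominant flicker frequency in a brightness trace via FFT
    and match it to the nearest known fixture."""
    spectrum = np.abs(np.fft.rfft(brightness - brightness.mean()))  # drop the DC level
    freqs = np.fft.rfftfreq(len(brightness), d=1.0 / FS)
    peak = float(freqs[spectrum.argmax()])
    nearest = min(LIGHT_IDS, key=lambda f: abs(f - peak))
    return LIGHT_IDS[nearest], peak

# Simulate one second of brightness samples under a faintly 90 Hz-modulated light.
t = np.arange(0, 1.0, 1.0 / FS)
samples = 1.0 + 0.05 * np.sin(2 * np.pi * 90.0 * t)
print(identify_light(samples))
```

Knowing which fixture the camera sees pins the device to that light’s position on the venue map, which is the whole trick.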
RetailMeNot has quietly built up a big footprint by activating geo-fences around the 500 biggest malls in America, as reported by the New York Observer. Any shopper with the app is immediately alerted to sales in that mall when they enter the mall area. The company has significant online couponing pedigree and resources to make this a long-term play if they can get shoppers to download the app. Most likely, the geo-fence uses GPS and/or A-GPS, and this would be an example of using the appropriate technology for the right purpose. This capability could not be used to differentiate presence in a particular store or department, as some others do on this list, but its big footprint can still provide consumer value and would have the advantage of being the first app to activate, and so might direct a consumer to a particular mall store. I would try it!
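A mall-sized geofence like this can be approximated with nothing more than a great-circle distance check against a table of venue coordinates. The mall name, coordinates, and radius below are hypothetical placeholders, not RetailMeNot’s actual data.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Hypothetical geofence table: mall name -> (lat, lon, radius_km)
MALLS = {"Roosevelt Field": (40.7403, -73.6134, 0.8)}

def active_fences(lat, lon):
    """Return the malls whose fence the given position falls inside."""
    return [name for name, (mlat, mlon, r) in MALLS.items()
            if haversine_km(lat, lon, mlat, mlon) <= r]

print(active_fences(40.7400, -73.6130))  # inside the fence
print(active_fences(40.8000, -73.6000))  # several km away: []
```

The coarse accuracy of GPS/A-GPS is harmless at this scale — an error of 50 meters does not matter against an 800-meter fence, which is why this is the right technology for mall-level alerts but not for store-level ones.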
Shopkick has name-brand investors and lists a number of tier-one retailer affiliates (traction). The Shopkick app provides what it calls “kicks” as rewards for purchasing goods in affiliated retailers, and those kicks can be redeemed for free stuff, like a latte (their example). The business model appears to be undergoing some modification as of the writing of this post, but the value for my purposes is in highlighting the unique method used to identify when individual users arrive at affiliated retailers.
As mentioned earlier, GPS and A-GPS do not produce the consistent accuracy needed to pinpoint a user’s location as they enter a particular store. Many stores are under cover in indoor malls, or shaded by skyscrapers in urban areas, or sit at different vertical heights in a building. Despite tweaking, GPS is ineffective in these situations. Shopkick places audio-frequency-emitting units on either end of a retailer’s doorway, or department, and as you enter, the Shopkick app recognizes the individual user and the personalization begins (or rather, continues). This model requires someone to go to the store and install and test the units in the physical location. It would also seem to require the mobile device to continually listen for the units’ frequency, which would affect battery life, and I am unsure users would like having their microphone on all the time. I do not know precisely how this identification method works, but I would hope that a GPS geo-fence is used around a participating retailer so that the microphone is only turned on when the user is in its close vicinity. Even then, this would require the app to run continuously in the background as a smart service, polling to detect when the user is nearby. Smart services will become more prevalent.
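Shopkick has not published how its audio detection works, but a plausible sketch is a near-ultrasonic tone detector built on the Goertzel algorithm, which measures the energy of a single target frequency far more cheaply than a full FFT — exactly what a battery-conscious listening app would want. The beacon frequency and threshold below are my assumptions, not Shopkick’s specification.

```python
import math

def goertzel_power(samples, target_hz, fs):
    """Goertzel algorithm: power at one target frequency in an audio buffer.
    Cheaper than an FFT when only a handful of beacon tones matter."""
    n = len(samples)
    k = round(n * target_hz / fs)          # nearest DFT bin to the target
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

FS = 44100.0
BEACON_HZ = 18000.0  # hypothetical near-ultrasonic doorway tone, inaudible to most adults

def beacon_present(samples, threshold=1e3):
    return goertzel_power(samples, BEACON_HZ, FS) > threshold

# A 100 ms buffer containing the tone, and one of silence.
t = [i / FS for i in range(4410)]
tone = [math.sin(2 * math.pi * BEACON_HZ * ti) for ti in t]
silence = [0.0] * 4410
print(beacon_present(tone), beacon_present(silence))  # True False
```

Gating this detector behind a coarse GPS geo-fence, as speculated above, would let the microphone and the Goertzel loop stay off until the user is already near a participating store.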
I believe that models requiring the service provider to perform hardware installation in order to detect users’ presence will prove inferior to other models, unless that location information can be sold or shared with other companies as an LBS platform. The immense challenge is the funding and time it takes to populate all of the devices everywhere and to get the platform accepted as a standard means of presence identification. Even Skyhook Wireless leveraged millions of users’ existing WiFi in order to create an LBS. Other means – such as having users themselves drop tags, change light bulbs, or populate cameras for facial recognition – require no such service provider hardware installation, and are more promising.
ALMAX is an Italian mannequin producer that has come up with a new line of business it calls the “EyeSee Mannequin”. As mentioned in my last post, Digital Trends reports that retailers like Benetton are using ALMAX’s embedded mannequin cameras in the US and Europe to track customers without their knowledge. The system identifies not only your gender and age, but also your race. While the product does not yet identify you uniquely, a voice recorder is being developed to eavesdrop on your in-store conversations as well. This is being done to provide the shopper with better in-store recommendations and the retailer with better analytics to improve the customer experience. If facial recognition were incorporated behind these cameras – potentially with an app linking your smartphone, or with digital displays à la the movie Minority Report – this would be another example of the transformation from generic mass buying analytics to individual preference learning and personalized shopping experiences.
Tobii eye tracking promises “to analyze vision, human behavior, user experiences and consumer responses,” according to their website. This data will be used to infer a consumer’s preferences from metrics related to their gaze: how and where the eye focuses, how long it dwells on an object, and behavioral indicators attributed to human eye functions such as pupil size and dilation. Once calibrated, it can recognize where your eye focuses on a computer screen via near-infrared mini-projectors that bounce light off of your eyes and optical sensors that capture the data for processing according to mathematical models. The site mentions other options, such as mobile eye tracking where a camera is attached to a pair of glasses (Google, anyone?), long-range tracking to analyze your TV viewing, and remote eye tracking from afar using motors. I have often focused on an object while thinking of something completely different – you have to put your eyes somewhere, right? Yet combine this capability with Google’s video object recognition, and the inferences about my interests would still be placed in my database as part of my preferences. Thanks, but I don’t even like anchovy pizza!
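As a toy illustration of the “you have to put your eyes somewhere” problem, here is a naive interest score that sums gaze dwell time per object but discards fixations where pupil dilation stayed flat — a crude guard against staring while thinking of something else. The fixation data, labels, and dilation threshold are invented for the sketch; real gaze-inference models are far more sophisticated than this.

```python
from collections import defaultdict

# Each fixation: (object_label, dwell_seconds, pupil_dilation_ratio vs. baseline)
fixations = [
    ("red handbag", 2.4, 1.15),
    ("store sign",  0.3, 1.00),
    ("red handbag", 1.8, 1.20),
    ("anchovy pizza ad", 3.0, 0.98),  # a long look with a flat pupil: probably zoning out
]

def interest_scores(fixations, min_dilation=1.05):
    """Sum dwell time per object, counting only fixations where pupil
    dilation also rose above the baseline threshold."""
    scores = defaultdict(float)
    for label, dwell, dilation in fixations:
        if dilation >= min_dilation:
            scores[label] += dwell
    return dict(scores)

print(interest_scores(fixations))  # only the handbag fixations survive the filter
```

Even with a filter like this, the inferences remain guesses, which is exactly the concern raised above about what ends up in your preference database.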
It should be mentioned that the good folks at Tobii are also applying their capabilities to matters such as early disease identification and assistive learning.
SceneTap is an app that allows you to monitor the average age, gender and size of crowds in restaurants/bars to help you choose where to go. They propose that patrons will be happier by ensuring they go to a “scene” that meets their expectations, and SceneTap offers restaurants the analytics on patrons. Perhaps in the future, a Wall Street analyst will want access to patronage analytics of a public, or soon-to-be-public, retailer in order to discern trends and help make a call on the stock.
How do they do it? SceneTap places two cameras near the bar entrance, and bars do not alert patrons that they are being tracked, because the facial identification tunes out your exact image in favor of general identifying features. However, the company has applied for patents covering the ability to cross-reference your image in real time with social databases to determine your occupation and income, and even your criminal history. Another US startup, Redpepper, is targeting the same market space under the brand Facedeals. Facedeals is built on the Facebook Graph API and offers tailored rewards to those who adopt the app and are facially recognized. I believe this is the tip of the iceberg, and there will be considerably more like these two startups.
MC10, an emerging Cambridge, MA company, is pioneering things like flexible circuitry in plastic that adheres to your skin to sense heart rate and other physical data to be read by RFID. MC10 eventually wants to create invasive body sensors. The potential for m-health is tremendous, and the compelling value proposition will lie as much in systemic cost reduction as in better individual health. It would take an entirely new series of blog posts to do justice to the SoLoMo opportunities in this vertical. Here’s a quick video to give you a taste of the future.
mHealth: Medicine Meets Mobile
Integrating different vendor capabilities, such as facial recognition, Tobii’s gaze tracking and Google’s video object recognition would produce entirely new fields of personalized data capture. For marketers, these would waterfall from predictive analytics engines through to location-based digital ads and couponing. Making this integration seamless would take some effort, but you can “see” where all of this may be headed. (pun intended) While potential partnering efforts / integrations like these are still all speculation on my part, imagining them sheds light on the possibilities to make our lives more real time, informed, productive and convenient. If they make you apprehensive, read the rest of my series of posts.
Blog Series RECAP – The Hotel California Future of SoLoMo
To recap this series, in the first blog post, 2013 Smartphones Are So Last Year When I Can Program the World with my Face, I likened lyrics from the Eagles’ “Hotel California” to a future SoLoMo scenario where the User is the Interface, The World is the Computer, and the Situation is the Network. The second post suggested that your smartphone passcode lock will not matter if access is provided to the databases that personalize your digital experience. The third post explored the legalities and practicalities of privacy rights, emerging use cases, and possible outcomes.
After writing each of these, I have seen more and more concrete examples of many of my points in the marketplace. Some of these can be found in the links provided at the end of this post.
Finally, in this last post, I listed a number of companies that are pioneering the coming transformation to the Hotel California future. If I am right about 2013, you are about to read about an entirely new wave of innovative SoLoMo companies and capabilities that will shift the focus of mobility, social and local away from just your smartphone to the world around you. I hope that readers have gained a good sense of my views on where SoLoMo is headed, and that my examples prove this is not a distant future; momentum is building and it is happening now. If you have other examples, ideas, or questions, feel free to comment and share this post as widely as possible.
Other Relevant Links :
2008 : Future of Google Glasses: The Cyborg Mind
4/2012: The Rise of Smart Mobile Services (Not Apps!)
1/2013: Hack turns the Cisco phone on your desk into a remote bugging device
9/2012: Facebook Can ID Faces, but Using Them Grows Tricky
8/2012: Google lands patent for automatic object recognition in videos, leaves no stone untagged
1/2013: Obama Signs Amended VPPA Into Law: Netflix Users Can Now Share Viewing History On Facebook
1/2013: Key Takeaways from the California AG’s Mobile Apps Report (privacy best practices in mobile app ecosystem)
10/2012: Cybercrime: Mobile Changes Everything — And No One’s Safe
1/2013: (Facebook) Instagram Asking for your Government Issued Photo IDs Now, Too
1/2013: Actual Facebook Graph Searches (I saved perhaps the best for last)