A concept drawing showing a finished device installed in a kitchen.
A new London startup wants to make controlling a smart home more natural and intuitive. The startup, AI Build, is making a home-hub prototype that it says will make turning on a light as easy as asking your mate to get up and do it.
It plans to do this through the addition of ‘teachable’ programming and visual input methods to instruct a home hub. Current models rely on voice commands or apps for input. But AI Build’s prototype will be the first to include a range of cameras as well, according to Daghan Cam, cofounder and chief executive officer of the startup.
The device will attach to the ceiling of a room in a connected home. It will have an array of six cameras, each covering 60 degrees, to give it a 360-degree view of the room. The in-built computer can then be taught where objects are in the room, to recognize certain people, and to respond to a range of motions and gestures.
“The idea is to make home automation as easy as asking a friend to turn on a light,” he explains. “You’d ask your friend and point at the light you’d want to activate. Compare that to what’s currently on the market. Currently a mobile app takes multiple steps to activate. You unlock your phone, you open the app, you tell it to turn on a specific light. This is more natural. Instead of using a mobile or a remote control, you use existing skills and natural language.”
But of course the addition of cameras to a smart-home device brings an array of new privacy and security concerns. Cam says the company is attempting to minimize these by keeping data local and encrypted. The system uses its own computing power to do most of the work, which means all of the voice- and image-related data stays on the device. This differs from many similar apps and devices, which send data to more powerful central processors, Cam adds.
But this approach creates its own problems. “It’s the biggest investment we’ve done and is what makes the device so expensive,” he adds. The startup is looking at ways to cut this cost in the future – for example by creating molds for plastic injection. Currently the casing is produced by large-scale 3D printing, but an injection-molding system could be more efficient.
The company has used around £100,000 in initial seed money to assemble a team and create the workings of a prototype. Much of the programming and design has been done but the initial prototype, while functioning, is still in a series of pieces, says Cam. The team hopes to have a fully functioning and assembled version together in the next few months.
Once this is achieved it will be used as a demonstration model in the hope of attracting further funding, as well as for a crowdfunding campaign. The firm will attempt to sell the first 100 units on Indiegogo at £750 a pop. This money will be used to finalize the device over the following year, says Cam.
After that the plan would be to sell more devices at somewhere between £900 and £960 each – with an anticipated initial yearly revenue of between £900,000 and £960,000. “We’d be mostly concentrating on making customers happy. We realize there are going to be unanticipated problems with the product, so the strategy is to talk directly with customers about improvements and then to focus on getting bigger,” he says.
From there the company would hope to start selling under a business-to-consumer (B2C) and business-to-business (B2B) model – with the real growth and money in the B2B side. “You’d be looking at selling 1,000 products at a time to developers building new complexes,” Cam explains. “Eventually the long-term vision of the company would be to change the construction industry by bringing artificial intelligence to the built environment.”
This would include combining artificial intelligence and large-scale 3D printing to create better, more integrated homes, he adds.

To support this kind of natural interaction, the AI Build hub will use reinforcement-learning algorithms, meaning that over time it should be able to pick up the natural gestures and voice idiosyncrasies of frequent users. Users can customize it to respond in certain ways to specific gestures – although it will come pre-programmed with a standard set of commands for most functions, Cam says.

But first – to fully assemble that working prototype.