Tidy is our answer to an everyday office task: an AI that can tell us when a meeting room is in order and ready to use. But with so many different definitions of what an ordered room looks like, how could we develop a visual recognition system that works for all use cases? The answer is that Tidy doesn't know in advance how a room should appear. Instead, users teach the system, simply by showing it the room in "tidy" and "messy" states.
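The article doesn't describe Tidy's internals, but the teach-by-example idea can be sketched as a nearest-centroid classifier over image embeddings: average the embeddings of the user's "tidy" and "messy" example photos into one prototype per class, then label a new snapshot by whichever prototype is closer. Everything here is an illustrative assumption, including the `embed()` stub, which stands in for a real pretrained image encoder.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder encoder: flatten and L2-normalize the pixels.
    # A real system would use a pretrained vision model here.
    v = image.astype(float).ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def fit_centroids(tidy_images, messy_images):
    # Average the embeddings of each labeled set into one prototype per class.
    tidy_c = np.mean([embed(img) for img in tidy_images], axis=0)
    messy_c = np.mean([embed(img) for img in messy_images], axis=0)
    return tidy_c, messy_c

def classify(image, tidy_c, messy_c):
    # Nearest-centroid decision: whichever prototype is closer wins.
    v = embed(image)
    d_tidy = np.linalg.norm(v - tidy_c)
    d_messy = np.linalg.norm(v - messy_c)
    return "tidy" if d_tidy <= d_messy else "messy"
```

With a strong pretrained encoder, this kind of few-shot scheme needs only a handful of user-provided examples per class, which matches the "show it the room in both states" interaction the article describes.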
With Tidy, we wanted to demonstrate a different approach to using AI, focusing on an intuitive experience and responding to in-context needs.
AI systems can recognize objects in images with remarkable precision. Often, though, what matters more is the recognition system's ability to adapt to highly particular situations. This is the case with Tidy. The system can be freely adapted to multiple industries and scenarios: knowing when to reorder stock or replenish a shelf in a retail establishment, for example, or detecting specific anomalies on a production line. Tidy's unique approach is the root of its potential: a flexible visual recognition system that any user can quickly and easily customize for their own specific situation.