Meet Chris Maury, the CEO and founder of the startup Conversant Labs.
A few years ago, Chris was diagnosed with Stargardt’s disease – a type of macular degeneration that eventually leads to blindness. When Chris started looking into the landscape of assistive and voice-based technologies, he was disappointed. He knew that the experience of voice-based interfaces could be better. Furthermore, he realized there was a missed market opportunity: good voice-based applications have value to everyone, not just the blind.
He founded Conversant Labs in January 2014 with the mission to make voice-based interfaces easy to use, easy to build, and easy to design. Since then Chris and his team have developed a number of products in line with this mission, including their SayKit SDK and SayShopping iPhone app. Most recently, they released Yes, Chef! – a hands-free, voice-based cooking assistant app. In our interview we talked about the future of voice-based applications, the trials and tribulations of running a startup, and how the “build it and they will come” slogan from Field of Dreams is a lie.
How did you come up with the idea for Conversant Labs?
Well, two things happened. I found out I was going blind and I was fired from my job. I suddenly found myself with a lot of free time. I started doing a lot of research into what accessible technology is and what resources are out there for me. I got a caseworker and started going through that process. I started building a relationship with The Lighthouse for the Blind in San Francisco, and that’s really where I started getting to know what tools and resources are out there for the blind.
So I was immersing myself in this problem and quickly realized how terrible the technology is. It’s all overpriced and the experience of using those products is horrible. And so, that’s how I identified the problem.
And that led you to developing voice-based technologies?
In terms of choosing to build voice interfaces in particular, it was very much a personal choice. I wanted to make it better and improve the process. Right now assistive technologies take a visual interface and translate that into an audio stream. So you take all of the design decisions that have gone into optimizing this interaction for vision, and, in many instances, go against those decisions and try to squeeze the interface into some sort of audio experience. Which just seemed backwards to me. That led me to want to spend my time working on audio-first technologies and designs.
I also had a thought that there is a general consumer appeal to voice-interfaces. I knew that it would be easier to build something for everyone, rather than just for the blind. And I really think that there is a place for voice-interfaces for general consumers.
What’s the one thing that keeps you up at night?
For a long time I was worried that Apple or someone was going to announce that they were doing exactly what we were doing. But we got over that fear pretty quickly.
“I feel like we’ve done something extremely valuable, something that’s not been done before, something that people will find useful, something that is free…but I still worry that we might not get attention on it.”
It’s worrying…I worry about not being able to get the word out. Yes, Chef! is a good example. I feel like we’ve done something extremely valuable, something that’s not been done before, something that people will find useful, something that is free…but I still worry that we might not get attention on it. That we might not be able to get people to use it.
Do you think that’s because you have to compete against the noise of some larger players?
Yeah. I learned a long time ago that building it doesn’t mean they will come. Field of Dreams is a lie. If you build it, they won’t just come.
What is the one thing that you think is going really well?
I think our ace-in-the-hole is that we started working on this two years before anyone else. We have a really big head-start in terms of knowledge and how to design and build voice-interfaces. And I think that’s really valuable. That’s our edge, but whether it’s enough of an edge is still to be determined. There are other players that are quickly catching up.
Do you think it’s because of devices, like Amazon Echo, coming onto the market and gaining some broad appeal?
I think it’s that. It’s also Microsoft boldly claiming that conversation is the next computing platform. There has been a lot of energy invested over the past 6 months into technologies like chatbots and conversational apps. There is a lot of buzz and a lot of hype. But not a lot of breakout applications. The Echo is the one example of a really successful voice-UI, but there still hasn’t been an application for Echo that’s really broken out.
I feel like Yes, Chef! and applications like it could become that breakout app. That’s what we are working towards.
So it sounds like you came into launching your startup from a very user-experience focused place. You had the knowledge that there exists this large experience problem with voice-UIs, and that, in general, assistive technologies suck. How has that influenced the decisions you made and continue to make for your business?
I like to think that we come from a very design-first perspective when it comes to creating and building our products. We started, like you said, with a clear idea of what the larger design problem is around voice-UIs. But the challenge we then faced was, “What is the best use-case where fixing this design problem will provide the most value?”
So at first, we decided to try to tackle shopping. In talking with the blind community, shopping is the #2 problem. The #1 problem is just general getting around and transportation. But shopping is a close second, and that was a problem we could develop a solution for. Also, by facilitating a purchase we saw, from the business side, that it could be a way for us to make money.
It turns out SayShopping didn’t work out exactly how we wanted.
Why do you think that was?
I think that in our conversations with blind and low-vision users, when they were talking about shopping they were talking about in-store shopping. Like navigating the grocery store and making purchases there. The app we built was for online shopping. Those are two distinctly different experiences. There were also technical limitations that forced us to work within a very narrow scope.
So that was an interesting experience in that, you went down this path and it didn’t turn out quite the way you had hoped. What do you think you’ve learned from that?
So we learned that voice was better, but all the best practices that exist for visual and graphical design simply aren’t there for voice applications. So in building SayShopping we discovered a lot of those best practices. We learned a lot about what the technology has to do when you can’t immediately look at the screen and see what happened. We got a lot of things wrong. But ultimately we learned how to make a smooth experience when voice is the principal modality.
How did that experience change your strategies for the company?
Honestly, the failure of SayShopping didn’t change our strategies too much. But we decided to take everything we learned from that experience and make it available to other developers and designers.
It really took Microsoft announcing their Bot Builder SDK for us to step back and reevaluate our strategy. When we first started, mobile was the only platform for which you could build a voice experience. But now, you have Amazon Echo and Google Home. So we took a step back and really evaluated “what are the core technologies?”, “what are the core platforms?”, “what are the industry use cases?” and created lots of lists with the pros and cons of each scenario. And we realized that mobile isn’t necessarily the only platform we should be looking at. We decided to shift our focus to Echo, and facilitate the design and development process on the Echo.
Do you ever feel that there is a conflict when you think about making the best possible decision for the user experience of a product versus thinking about making the best possible decision for the business?
Hmmm… not yet. Not so far. I think the biggest point of friction isn’t that trying to make money runs directly counter to the user experience. Rather, the realities of some of these platforms make it difficult to build the absolute best use-case for a voice application. The platforms that exist today don’t allow us to access or facilitate that experience.
On the one hand, that is a symptom of having closed, proprietary systems. On the other, it’s a symptom of it just being too early and the technology is too young. For example, there is growing development and interest in Virtual Reality, and there is a huge use-case for voice-applications there, but the platform technology just isn’t there yet. For a startup, there is just too much risk there. Though I would love to work on and solve that problem. Virtual Reality should be accessible too.
Any parting thoughts about what it’s like to be CEO of a start-up?
If you create value for someone, and create a good product, that should be all you have to do. But that is not the reality. It’s marrying that proposition of creating real value in the world with finding a way to capture that value.