Imagine that you had a hobby of playing around with technology in your house, and then you found a way to turn your enthusiasm into a business. That’s exactly what Lauren Golembiewski, and her partner in life and business Matt Buck, did. They combined their love for voice interaction and for music to create Voxable, a Texas-based company that helps businesses create their own voice and chat bots. Why did she think that her bot “formed a mind of its own”? And how can playing banjo, or any other instrument, become easier with voice tech? Those are among the things we talked about.
• HOW DID YOU GET INTO CONVERSATION DESIGN?
My background is in typical UX/UI design for web and mobile applications. My partner in business and life, Matt Buck, is in engineering and software development. Before voice really hit and got big, we were just playing around in our home with different home automation devices. We were turning on the TV and controlling the lights.
We got a hold of an Ubi smart speaker, which was the predecessor to the Echo. Then we were able to program our own voice interactions in our home, and we found it really magical. It was so fun to enable our environment with these home automation devices. When the Alexa Skills Kit came out, we saw how big companies were starting to really get behind this technology. So we quit our jobs and started Voxable – we wanted to build more and more of these devices and interactions. We got into it as hobbyists and turned it into a business.
• What parallels do you see between your previous occupation – UX/Product design – and conversation design?
There are a lot of parallels. I used a ton of my UX and UI design background in conversational design. There’s a lot you can extract from those processes in UX design and software development and apply to all the new technology coming into conversational design. The underlying process of understanding users and trying to create software that really benefits them is the same.
The affordances and tools you use to build the software are different, though. Learning them – especially having an engineer and software developer next to me – was much easier. I was able to take the underlying process and knowledge I already had and apply it to the conversational design process, which is similar to creating any other web or mobile application. The mode of input users interact with is voice, and that changes a lot of things, but it doesn’t change the fact that you’re still building a software application.
• WHAT’S THE bot or SKILL YOU’RE MOST PROUD OF?
I’m really proud of our work with South by Southwest and their intelligent assistant Abby. It was a really big project with a lot of different data points. The data you’re working with for a big tech and music conference is names of bands, names of places and venues, and all of those can be linguistically similar. A band’s name can be whatever the band wants it to be, and often there are names of speakers, but those people are also attached to bands or other projects.
So disambiguating all of the underlying entities, or important data points, in that system was a big challenge. Although the event didn’t take place this year, and we’re not sure when it will take place again, I’m really proud of the work we’ve done with them over the past few years.
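To make the disambiguation problem concrete, here is a minimal, hypothetical sketch (not SXSW’s actual system, and all names are invented): when a spoken name matches more than one entity type, the assistant asks a clarifying question instead of guessing.

```python
# Hypothetical sketch: a spoken name like "Jane Doe" might be both a
# speaker and a band at the same event. Rather than guess, disambiguate.
# All names and data here are made up for illustration.

BANDS = {"khruangbin", "jane doe"}
SPEAKERS = {"jane doe", "alex rivera"}

def resolve_entity(name: str) -> list:
    """Return every entity type a spoken name could refer to."""
    key = name.lower().strip()
    matches = []
    if key in BANDS:
        matches.append("band")
    if key in SPEAKERS:
        matches.append("speaker")
    return matches

def answer(name: str) -> str:
    """Respond directly if unambiguous; ask a clarifying question if not."""
    matches = resolve_entity(name)
    if not matches:
        return f"Sorry, I couldn't find {name} in the schedule."
    if len(matches) == 1:
        return f"Here's the schedule for the {matches[0]} {name}."
    options = " or the ".join(matches)
    return f"Did you mean {name} the {options}?"
```

The key design choice is that ambiguity is surfaced to the user as a conversational turn, not silently resolved by the system.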
Aside from that client work, I’m also really proud of our introduction to conversational design course. A lot of our work with companies is helping them learn how to implement these conversational design and development processes. So we distilled that down and made it accessible to anyone who wants to learn it online. It was a lot more work than we anticipated, but we got it done and it’s out there.
💬 Previously on Taking Turns 💬
Michelle Zhou: “Humans talking to machines are brutally honest”
Mary Tomasso: “Don’t just write a conversation – speak it”
Michelle Parayil: “Bad copy can ruin a customer’s day”
Henry Ginsburg: “Want to get in? Grab a pen and start writing”
Kent Morita: “In the right context, humor can be very effective”
Breakup, Pokemon and YASS!: Greg Bennett talks convo design
Hillary Black: “Chatbots are like Social Media on its early days”
Every Word Matters: Language lessons with Maaike Coppens
Thorben Stemann: “Users asked my bot for her picture”
Emiel Langeberg: “Voice Tech can be also a research tool”
Rebecca Evanhoe: “Context is the most important thing for voice“
• It’s hard to miss that one of your passions is music. How can conversational AI help in the process of, for example, learning a new instrument?
I started learning banjo about a year ago; it was my very first instrument ever. I love music and listening to it, but I never really felt I had what it takes to learn music. It just wasn’t something I was around as a youngster. My partner Matt has been a musician since he was very young, and he’s constantly been trying to pull me in. It’s one of his passions. It wasn’t until he bought a banjo and I picked it up that I thought, ‘I want to learn this’.
So I’ve been taking online lessons and watching lots of YouTube. And I discovered something very important: you can learn music. It is not an innate talent. It is a skill you can pick up, even in your 30s like me. As I started learning music, and especially the banjo, I realized there’s so much opportunity for voice interaction in these different learning platforms. At least on banjo – and I know on guitar too, because I started picking up the electric guitar a little bit this past year – a lot of the time you’re learning chord shapes. You’re trying to figure out the shape your hands need to be in, and you forget what fret or string you’re supposed to be fretting, what the exact position is.
When side projects are the best projects
And I just keep thinking: oh, I have an instrument in my hands. I don’t want to have to go pick up my phone with the chord diagram on it. I want to just be able to ask my smart speaker, which is also sitting right in front of me. I think there’s a lot of opportunity in music education for this kind of voice interaction. It’s something that felt really natural when I was sitting there learning an instrument. The Ultimate Guitar Tabs application does have some voice control built into the mobile app, but it’s not quite a learning experience.
So Matt and I keep devising all these side projects – which are, of course, the best projects – on top of our regular work. We really just want to make something for ourselves: tinkering around, being excited about the things we’re engaging in, whether it’s music or home automation. We’re always interested in voice-enabling our own environment, and music learning is just the latest thing I’m excited about building a voice interaction around, because I know it would help me a lot. So, you know, I want to build it.
• WHAT’S THE ONE THING EVERY VOICE ASSISTANT MUST HAVE, IN YOUR OPINION?
The most important step – and I’ve seen this with clients – is having really solid user research. User research is just that activity of understanding what your users want and need. In voice interfaces it’s especially important: users can really say anything; they’re given an open slot to speak in their own words.
Of course, we know users will have certain goals and will express those goals in certain ways. The technology can only do so much, especially if you’re building an application on top of an existing product. Likewise, an underlying API does certain things, but you need to really understand the way your users will verbalize their problems and goals, as well as all the ancillary questions they’ll ask. For example, in a music interaction they may ask for a chord diagram, or, ‘okay, what scale is that?’ They may have follow-on questions you won’t anticipate if you’re only thinking about what your application can provide.
If companies don’t invest in it upfront, the first version of their voice interface will be a mechanism to gather that research. They’re essentially creating their own experiment. That can be tough for the company and its expectations about performance, as well as for users and their expectations about what they’re supposed to be getting out of it. So I always caution companies: if you don’t find this out at the beginning, it’ll happen along the way. That can be your strategy, but be aware of it. You do need to uncover the exact, unique words your users will bring to your voice interface.
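One common way teams operationalize “the first version becomes the research” is a fallback handler that records every unmatched utterance. This is a minimal, illustrative sketch (the intents and keywords are invented, and real systems use trained NLU rather than keyword overlap):

```python
# A minimal sketch of using the fallback intent as a research instrument:
# when no intent matches, log the user's exact phrasing so the team can
# learn the words real users actually bring. Intents here are made up.

from collections import Counter

KNOWN_INTENTS = {
    "play_song": ["play", "song"],
    "show_chord": ["chord", "diagram"],
}

# Counts each unmatched utterance so frequent gaps surface first.
unmatched_log = Counter()

def handle(utterance: str) -> str:
    """Return the matched intent name, or 'fallback' after logging."""
    words = set(utterance.lower().split())
    for intent, keywords in KNOWN_INTENTS.items():
        if words & set(keywords):
            return intent
    unmatched_log[utterance.lower()] += 1
    return "fallback"
```

Reviewing `unmatched_log` periodically tells you which goals and phrasings your interface is missing, which is exactly the research the interview describes.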
CoCo & Co Content reCommendations
- How to build a bot using the four-turn template?
- What is deep context switching in Conversational AI?
- How can Conversational AI help in the fight against the Coronavirus?
- The Feedback Loop: A basic chatbot development process
• TELL US ABOUT AN AWKWARD, AMAZING OR SURPRISING THING THAT HAPPENED TO YOU DURING YOUR WORK.
This happens when I’m really deep in a project and testing it. I’m going through and trying to make sure all the different functionality is there. What ends up happening is this: I’m interacting with the bot and I get something I don’t expect, or wasn’t necessarily aware was going to happen. And I feel like – oh, did this thing form a mind of its own?
It’s that uncanny valley of ‘oh, wow, I didn’t expect that’. I’m the one who wrote it, but I still fool myself in the midst of interacting with it. You lose that sense that you’re really interacting with a flat, stale piece of technology. Bots and voice interfaces really feel alive.
It even fools me in the midst of testing; I’m like, ‘did we program that in, or where was that variation of the response…’ I wasn’t expecting it, and inevitably it is there somewhere and the bot doesn’t have a mind of its own. It’s funny, and I see that with users as well. We’re all creating this very alive-feeling technology, which has great power and great responsibility attached to it. So we work not to exploit that kind of connection that I, and users, find ourselves in. But it does happen to me, and I’m always surprised.
• WHAT TIPS CAN YOU GIVE TO ASPIRING CONVERSATIONAL DESIGNERS, OR TO PEOPLE WHO WANT TO JOIN THIS PROFESSION?
If you have roots in typical web design or mobile app design, start there. Don’t try to undo everything you’ve learned; it can provide a really strong foundation. Like I said, user research is really important. So is understanding your users and framing those use cases in a way that makes sense to the developers and to the rest of the team. Then finally, apply a design process that strongly considers the way the interactions will play out inside the voice or chat interface. To me, it really maps onto a typical design process.
If you don’t come from a design background, that’s okay. I would say start to understand and learn as much as you can about the existing technology. A lot of the platforms, like Alexa or Google Assistant, have documentation that tells you what those platforms can and can’t do. That documentation is really extensive and you can get lost in it, but a lot of them have created design-focused areas of their documents. Those help designers, very specifically, understand the technology. And if you can work with, become friends with, or marry a developer in this space, that close collaboration with someone who can level-set you on what is possible, and how easy it is to achieve, is really important.
“Find use cases that really speak to you”
And if you don’t have a developer, you can use tools on the market to help you understand what is available on those platforms. Voiceflow has a really good integration with Alexa and a lot of the tooling. Similarly, BotSociety has a lot of tools too, and I think they’re a little more geared towards Google Assistant. Those tools can help you create prototypes and start to work with the underlying technology. They also give you kind of a GUI over top of all the things Alexa or Google can do. From there, just get your feet wet, play around, and create something.
And then, like I do, find those use cases that really speak to you, that you really want to see happen in a voice interface. Even if it’s as simple as controlling some device that you have. Because it’s not until you build something that seems simple, like a single command, that you realize: oh, well, what if I say this, or my internet connection is poor, or that device doesn’t respond? All those situations can happen, even in one interaction.
And really, breaking that down, getting something to work, and then showing it to other people will really exhibit all of the nuance that is within a conversational interaction. So if you have underlying knowledge, use that as your foundation. And if you don’t, try to get it. Try to understand the technology as best you can: what is possible and what will work.
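The point about one “simple” command fanning out into many situations can be sketched as a handler with an explicit branch for each failure mode Lauren mentions. This is a hypothetical illustration; the function, thresholds, and replies are all invented:

```python
# Hypothetical sketch of the branches hiding inside one simple command
# like "turn on the lights": low speech-recognition confidence, an
# unreachable device, and a device that doesn't acknowledge each need
# their own spoken reply. All values here are illustrative.

def turn_on_lights(confidence: float, device_online: bool, device_ack: bool) -> str:
    if confidence < 0.5:
        # The system isn't sure what was said: confirm, don't act.
        return "Sorry, did you want me to turn on the lights?"
    if not device_online:
        return "I can't reach the lights right now. Check your connection."
    if not device_ack:
        return "The lights aren't responding. Try again in a moment."
    return "Okay, lights on."
```

Even this toy version has four distinct user-facing outcomes for a single utterance, which is the nuance that only shows up once you build and test the interaction.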
• Tell us more about VOXABLE.
Voxable is a conversational design and development agency. We help companies build their conversational interfaces: chatbots and voice interfaces. We also do a lot of workshops teaching companies the underlying process for conversational design and development. Whether they have a design team and a development team, or copywriters and a development team, or just a development team, we help them understand: here are the things you need to have in place, the documentation. Or: here’s how to create a natural language understanding model, which is probably the core piece of technology that is most difficult for teams to assess and understand as they add conversational design and development to their wheelhouse.
We also built an online course so anyone can learn the same skills we teach companies internally, but self-paced, as a solo learner. And we’re working on some really exciting things as well.
“The Figma of Conversational Design”
For example, we’re creating a conversational design tool. We think there’s a big gap in the marketplace for a tool that supports the conversation design process. So we’re creating the Figma of conversational design, and we’ll be talking more about that this fall & winter. That’s something I’m really excited about. Again, I find that there’s a huge gap there for me.
I don’t really have any tool that helps me write sample scripts and build conversation flows, other than kind-of-smart whiteboards. So we’re getting beyond the whiteboard and Google Docs to something that really aids a designer in the conversational design process. Tooling is a really important piece of the puzzle for any designer, and especially for developers. We’re building a tool that designers can actually use to support their process, one that is focused on the exploration of a conversational design and is unencumbered by implementation. It’s agnostic of any platform. It’s called Voxable Studio, and we’ll be talking about it more and more as the year goes on.
Next week, we’ll have episode 3 of Coming To Terms with AI! In the meantime – subscribe to our YouTube channel | Join our Discord community | Sign up for our newsletter | Follow us on Facebook, LinkedIn, Instagram or Twitter