How awesome would it be to build your own smart device? You could instruct it to do exactly what you need, pair it with whatever you want, have full control over it, and so much more.
But how do you actually make it understand you? Spoken language doesn't come easily to a machine, and there are so many different ways to express the same question! Since it's 2019, remote controls shouldn't be the answer, don't you agree? Your device should be voice-triggered and understand which actions you're trying to perform.
In this talk, we will explore how natural language understanding and processing works, how we can use Dialogflow for it, how we can build a smart home device for the Google Assistant using Actions on Google, and, finally, how we can create our own voice activation using TensorFlow.