I'm trying to implement my own home automation infrastructure, and at the moment I'm able to vocally interact with some self-made devices with a flow like this:
Voice => GoogleHomeDevice -> IFTTT.COM-Applet -> IO.ADAFRUIT.COM-Feed -> ESP32(MQTT) => Device
Due to some limitations of the IFTTT/IO.ADAFRUIT nodes, I would like to switch to this kind of flow:
Voice => GoogleHomeDevice -> (SOMETHING) -> GC-Functions -> GC-PubSub -> ESP32(MQTT) => Device
The (SOMETHING) I need is a component that delivers my voice commands, as text, to a Google Cloud Function, so that the function can make them available to another service (GC Pub/Sub), the same way the IFTTT.COM applet does for the IO.ADAFRUIT.COM feed.
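To make the GC-Functions -> GC-PubSub step concrete, here is a minimal sketch of what the Cloud Function side could look like. It assumes the (SOMETHING) delivers the spoken command as a JSON body like {"command": "turn on the lamp"} — that field name, and the "source" attribute, are my assumptions, not anything defined by GCP. The actual publish call needs the google-cloud-pubsub client, so it is only indicated in a comment; the sketch just builds the message in the shape the Pub/Sub REST API expects (base64-encoded data plus optional attributes).

```python
import base64
import json

def build_pubsub_message(body: bytes) -> dict:
    """Turn a webhook payload such as b'{"command": "Turn On the Lamp"}'
    into a message dict shaped for the Pub/Sub REST publish API,
    whose `data` field must be base64-encoded."""
    payload = json.loads(body)
    # "command" is an assumed field name for whatever carries the voice text.
    command = payload.get("command", "").strip().lower()
    if not command:
        raise ValueError("payload has no 'command' field")
    data = json.dumps({"command": command}).encode("utf-8")
    return {
        "data": base64.b64encode(data).decode("ascii"),
        # Attributes let the ESP32 side filter messages without decoding the body.
        "attributes": {"source": "voice"},
    }

# Inside the Cloud Function you would then publish the message, e.g. with the
# google-cloud-pubsub client (omitted here to keep the sketch dependency-free):
#   publisher.publish(topic_path, data=raw_bytes, source="voice")
```

On the ESP32, the MQTT subscriber would decode the `data` field back to JSON and act on the command.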
The way the IFTTT.COM service performs this task looks quite straightforward, since it "only" needs my Google account to intercept my interactions (though I know a lot may be hidden under the hood).
I've been searching for alternative solutions, but so far everything I've found involves a complex interaction of many components (Google Assistant, Actions on Google, Firebase, ...).
Before I start building something that complicated, I would like to know how the seemingly simple task of capturing my voice commands can be achieved (preferably using only GCP features).
Thank you.
question from:
https://stackoverflow.com/questions/65886625/how-to-emulate-ifttt-functionalities