The way that “Hey Alexa” or “Hey Google” works is by, like you said, constantly analyzing the audio around it. However, that audio is only analyzed locally for the specific wake phrase, and it’s stored in a circular buffer a few seconds long so the start of your request is already in memory when the phrase is detected. If the phrase is not detected, the buffer is constantly overwritten and nothing is sent to the server. If the phrase is detected, the whole buffered request is sent to the server, where more advanced voice recognition can be done.
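To make that flow concrete, here’s a rough Python sketch of the loop, not any vendor’s actual code; the frame size, buffer length, and the detector itself are placeholders I made up:

```python
# Sketch of the wake-word flow described above: audio goes into a small ring
# buffer and is only checked locally; nothing leaves the device unless the
# phrase is detected. All sizes and functions here are illustrative only.
from collections import deque

SAMPLE_RATE = 16_000          # samples per second (assumed)
FRAME = 1_600                 # 0.1 s of audio per frame (assumed)
BUFFER_SECONDS = 3            # only the last few seconds are ever kept

# Ring buffer: the oldest frames are overwritten automatically once it is full.
ring = deque(maxlen=BUFFER_SECONDS * SAMPLE_RATE // FRAME)

def wake_word_detected(frames) -> bool:
    """Placeholder for the on-device keyword-spotting model."""
    return False  # real devices run a small local model here

def send_to_server(frames) -> None:
    """Placeholder for uploading the buffered request for full recognition."""
    ...

def on_audio_frame(frame) -> None:
    ring.append(frame)                    # overwrite the oldest audio
    if wake_word_detected(ring):          # local check only
        send_to_server(list(ring))        # the whole buffered request goes up
        ring.clear()
    # otherwise: nothing is transmitted, the buffer just keeps cycling
```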
You can very easily monitor the traffic from your smart speaker to check whether this is true. So far I’ve seen no evidence that this has stopped being the common practice, though I’ll admit to not reading the article, so maybe this has changed recently.
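If you want to try it yourself, here’s roughly what that check could look like with scapy; the IP address is hypothetical, and you’d need to capture from a point that can actually see the speaker’s traffic (your router, a mirrored switch port, or an access point you control), with root:

```python
# Rough sketch: log every packet to/from the speaker with its size and time.
# An idle speaker should show only small, occasional keep-alive traffic;
# a sustained stream of large packets would suggest audio being uploaded.
from scapy.all import sniff

SPEAKER_IP = "192.168.1.50"   # assumption: your speaker's LAN address

def log_packet(pkt):
    print(f"{pkt.time:.0f}  {len(pkt):5d} bytes")

sniff(filter=f"host {SPEAKER_IP}", prn=log_packet, store=False)
```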
Multiple sources have reported at this point that this happens because of detected proximity. Basically, they know who you hang out with based on where your phones are, and they know the entire search history of everyone you interact with. Based on that, they can build models to predict how likely you are to be interested in something a friend has looked at before.
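As a toy illustration of that idea (everything here is invented for the example, not how any company actually implements it), the logic boils down to something like:

```python
# Infer "who you hang out with" from co-located phone pings, then score a
# topic by how many of those contacts have searched for it before.
from collections import Counter

def infer_contacts(pings, you, min_meetings=3):
    """Count how often another phone shows up at the same (place, hour) as yours."""
    your_slots = {(place, hour) for user, place, hour in pings if user == you}
    co_located = Counter()
    for user, place, hour in pings:
        if user != you and (place, hour) in your_slots:
            co_located[user] += 1
    return {user for user, n in co_located.items() if n >= min_meetings}

def interest_score(topic, contacts, search_history):
    """Fraction of your contacts who have searched for the topic."""
    if not contacts:
        return 0.0
    hits = sum(1 for c in contacts if topic in search_history.get(c, set()))
    return hits / len(contacts)

# Toy usage with made-up data:
pings = [("you", "cafe", 14), ("alice", "cafe", 14), ("bob", "gym", 9),
         ("you", "office", 10), ("alice", "office", 10),
         ("you", "park", 17), ("alice", "park", 17)]
history = {"alice": {"hiking boots", "tents"}, "bob": {"keyboards"}}
contacts = infer_contacts(pings, "you")           # {"alice"}
print(interest_score("tents", contacts, history)) # 1.0
```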