I wonder if it would actually materialise, considering the recent case where an airline's AI chatbot promised a refund that didn't exist, and the company was still required to honour that promise.
The risk of the bot offering the customer something the company would rather it didn't might be too much.
It seems more likely that companies will either have someone monitoring the bot, ready to cut it off if it goes against policy, or they'll just slap a generated voice on a text interface that the client types into. That way they avoid the risk entirely and can pack more customers per agent at a time.