This article is a bit all over the place in arguing about UBI. That’s understandable, because there is no universal idea of what UBI is. That said, it might have been more readable if you had nailed down which version of UBI you were talking about at the start.
You’ve also updated the article with statements about the timeframe you are referring to, but that timeframe seems to be based on philosophers like Sam Harris, whom you cite.
AI is one of the factors that will impact livelihoods in both the relatively near term and the long term. UBI, or something like it, will probably be necessary long before the timeframe you consider, and by the stage you are talking about it will have been shaped by the versions that came before.
Harris is describing a philosophical/theoretical conclusion, the argument taken to its limits, and his understanding of the technology is limited. Humanity may be more likely to end from an asteroid strike before AI and robots make human work completely obsolete.
There are aspects of being human that no one today has a clue how to implement; replicating them is in the realm of anti-gravity machines. Harris is a neuroscientist and philosopher, yet you’re trying to have a practical argument about UBI at a very distant point in time. Leave it to our great-great-grandchildren to worry about implementing practical solutions in the timeframe you are discussing. By the time it arrives, the world will already be unimaginably different. For now, this remains a philosophical/theoretical discussion for the likes of Sam Harris.