I think that by now AI should be part of rudimentary education (not the technical details, but the gist and scope). Many people don't seem to realize the role it plays in modern societies and economies. News, politics, the stock market, social media, advertising, the internet: by now all of it is driven more by AI than by people. It's the reason companies try to collect as much information about you as possible. You could say it offers the power to have a customized Sith Lord for every citizen, so to speak. That is, manipulation is customized: AI can exploit each person's situation and psychological weaknesses to optimize the effect it has on their behavior. That's the big-business aspect, and it's the thing everybody should understand so that we can have a conversation about the ethics and the impacts. It should be combined with education in critical thinking and ethics. Everyone should know what and who is trying to manipulate them, and how.
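To make the mechanism a little more concrete, here is a minimal toy sketch of what "customized manipulation" means mechanically: score each candidate message against a profile of the individual and show whichever one is predicted to move that person the most. The profiles, messages, and weights below are invented for illustration only; real systems learn this kind of scoring from behavioral data at a much larger scale.

    # Toy sketch: pick the message predicted to influence a given person the most.
    # Profiles, messages, and weights are made up for illustration.
    profiles = {
        "alice": {"fear_of_missing_out": 0.9, "price_sensitivity": 0.2, "status_seeking": 0.4},
        "bob":   {"fear_of_missing_out": 0.1, "price_sensitivity": 0.8, "status_seeking": 0.3},
    }

    # Each candidate message targets certain psychological traits with some weight.
    messages = {
        "Only 3 left in stock!":         {"fear_of_missing_out": 1.0},
        "Lowest price of the year":      {"price_sensitivity": 1.0},
        "What top professionals choose": {"status_seeking": 1.0},
    }

    def predicted_effect(profile, targeting):
        # Predicted behavioral effect: dot product of personal traits and message targeting.
        return sum(profile.get(trait, 0.0) * weight for trait, weight in targeting.items())

    for person, profile in profiles.items():
        best = max(messages, key=lambda m: predicted_effect(profile, messages[m]))
        print(person, "->", best)   # alice gets scarcity framing, bob gets price framing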
Besides the role of AI in manipulating human behavior, advances in autonomous robotics are set to further transform a number of areas: mining, war, espionage, space, manufacturing, farming, etc. The specifics of exactly how and when are uncertain, but the overall direction is fairly clear. The main factors that could change things are human intervention to regulate how AI is used, and competition between groups of people for control and domination. Beyond that, if you look at the incentives and at what is possible, you can get a good idea of what the future is likely to look like.
Personally, I think space is the big one. Modern AI is just about at the level where many of the key breakthroughs envisioned from the beginning by people such as von Neumann become feasible: interstellar space missions, massive industries in outer space, terraforming, and so on. How far out these things are is not clear; they still require extensive human input in design and engineering. But past a certain level of capability they could be scaled up enormously. We could, for example, launch a single automated mission that sends probes to millions of stars, each of which then launches millions more probes, and so on.
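A back-of-the-envelope sketch shows why self-replication changes the scale so dramatically. The replication factor and the star count below are arbitrary illustrative assumptions, not predictions:

    # Toy estimate: growth of self-replicating probes.
    # Assumes each probe that reaches a star launches `replication_factor` new probes.
    replication_factor = 1_000_000   # probes launched per settled star (assumed)
    stars_in_galaxy = 2 * 10**11     # rough order-of-magnitude figure for the Milky Way

    probes = 1
    generation = 0
    while probes < stars_in_galaxy:
        probes *= replication_factor
        generation += 1

    print(f"~{generation} replication generations to exceed {stars_in_galaxy:.0e} stars")
    # -> ~2 generations: a million probes after the first wave, a trillion after the second.

With those assumed numbers, two generations of replication already exceed the number of stars in the galaxy; the limiting factors become travel time and engineering reliability rather than the count of missions launched from Earth.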
If you go further out, you can expect a time when science, mathematics, and engineering are also dominated by AI. At that point, it is worth asking what role the human being has. The AI will develop insights, construct proofs, record observations, do analysis, pose new questions, maintain an awareness of the state of the art, and so on. It will share this information in a distributed way, in some non-human-readable form. People would, by default, have little idea of what is going on, but would notice improvements in technologies. We will likely act as managers giving approval on high-level projects, while struggling not to micromanage things we don't understand. Efforts will be made to improve communication between AI and people, so that we can understand as much as possible of what it is learning and doing, and participate as much as possible in decision making. Many proofs, analytic functions, and rationales will simply be too large and complex to fit in human memory and therefore to be understood in full.