In "Future? Tense!", an essay collected in From Earth to Heaven (1965), Isaac Asimov wrote about the nature of science fiction. He noted that science-fictioneers are stereotyped either as indulgers in weird fantasies or as farsighted predictors of the future. After discussing some SFers' successful, if limited, predictions, he noted that the real point is not the technological advance itself, but what would happen if it became common.

He himself had proposed that his robots would have "positronic brains", having recently learned about the positron, a sort of mirror image of the electron. However, when a positron runs into an electron, the two particles disappear into two or three very energetic gamma rays, with a combined energy of about what an electron would get from a million-volt battery (2 × 0.511 MeV ≈ 1 MeV). A positronic brain would quickly fry itself. But that was not the point -- the point is what would happen if artificially intelligent systems became common.

IA had become annoyed at all the SF stories about robots destroying their creators, with the implication that we were not meant to create such entities. He knew that many tools have various safety mechanisms, so wouldn't AI systems also need them? Thus his Three Laws of Robotics, which I rephrase as follows:

1. An AI system may not injure a human being or, through inaction, allow a human being to come to harm.

2. An AI system must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. An AI system must protect its own existence as long as such protection does not conflict with the First or Second Law.

Likewise, if automatic-driving cars became feasible, what would become of manual driving? IA once wrote a story, "Sally", in which manual driving had been outlawed as needlessly dangerous.

IA also imagined what SFers might have written about cars back in 1880. He didn't name any actual SF stories like that, but they remind me of the "treknobabble" in some Star Trek episodes.
Likewise, lots of visual-media SF is similarly absurd about its spaceships, making them seem too much like Earthbound vehicles.

IA also considered what social changes cars would make possible if they could be mass-produced in the millions, at prices low enough for just about anyone to buy them. Wouldn't people move outward and create suburbs? And so on. H.G. Wells predicted several such things in his 1901 book Anticipations of the Reaction of Mechanical and Scientific Progress upon Human Life and Thought, but IA thought of something that even HGW didn't: when people commute to cities, they have to have some place to leave their cars. So IA imagined an 1880 story about that problem, with the title "Crunch!" That seems like painful reality today, but maybe not in 1880; IA thought that such a story could have alerted at least some policymakers to the problems of a superabundance of cars.

IA also noted a prediction of the Cold-War nuclear stalemate. Robert A. Heinlein wrote "Solution Unsatisfactory" under the name Anson MacDonald back in 1941. Although he imagined radioactive dust rather than nuclear bombs, the essential outcome was the same. His hero asks whether the US could continue to have a monopoly on making radioactive dust, and concludes that it could not, because someone elsewhere would sooner or later reinvent it. The result would be an all-offense-no-defense stalemate, with every dust-possessing nation dependent on the goodwill of every other one.