First: what's the source of the rumor? Did they speculate about how it would be enforced?
From Nature Magazine:
Bryant Walker Smith, a law professor at the University of South Carolina in Columbia, is sceptical that the Moral Machine survey will have any practical use. He says that the study is unrealistic because there are few instances in real life in which a vehicle would face a choice between striking two different types of people. “I might as well worry about how automated cars will deal with asteroid strikes,” Walker Smith says.
The "wouldn't buy them" issue may be moot, since ride-sharing looks set to be the dominant first-wave application of autonomous cars, with people renting rides rather than owning the vehicles. I've never heard of Uber drivers being interviewed about their passenger-vs-pedestrian safety preferences before people consent to being shuttled around, but I don't get out much.
In 2015, a cyclist in Austin, Texas, confused a Google driverless car when he did a near-motionless “track-stand” at an intersection. The Google car was so bamboozled by the behavior of the balancing cyclist it would not budge.
For my money, an autonomous car reacting to potentially confusing bicycle behavior by stopping seems like a sensible and safe option.
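That "stop when confused" reaction can be sketched as a simple conservative planner policy: if the predictor can't commit to what a nearby road user will do next, hold position rather than guess. This is a minimal illustration, not how Google's (or anyone's) stack actually works; the class, function, thresholds, and confidence values below are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float         # distance from the ego vehicle, in meters
    intent_confidence: float  # 0..1, how sure the predictor is about the object's next move

def plan_speed(objects: list[TrackedObject],
               cruise_mps: float = 10.0,
               min_confidence: float = 0.7,
               caution_radius_m: float = 15.0) -> float:
    """Return a target speed; hold at zero if any nearby object's
    predicted behavior is too uncertain to commit to a motion plan."""
    for obj in objects:
        if obj.distance_m < caution_radius_m and obj.intent_confidence < min_confidence:
            return 0.0  # bamboozled: stay stopped until the prediction firms up
    return cruise_mps

# A track-standing cyclist: close by, and the predictor can't tell
# whether they're about to roll into the intersection.
cyclist = TrackedObject(distance_m=8.0, intent_confidence=0.3)
print(plan_speed([cyclist]))  # 0.0 -> stay put
print(plan_speed([]))         # 10.0 -> clear intersection, proceed
```

The safety property is that uncertainty degrades toward the least dangerous action (not moving), which is exactly the behavior the Austin incident demonstrated, annoying as it was for traffic behind.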
Something like 7,000 pedestrians and cyclists were killed in traffic accidents in 2017. My hunch is that a driver that can't play on its phone, can't drink, can't tend to children in the back seat, and can't have a bad day, and that has 360-degree laser ranging and stereo cameras letting it see ninja-dressed runners and unlit cyclists, all while probably speeding less than human drivers do, will be less likely to hit them.
Disclaimer: I work on autonomous-car steering (not the part that decides where to steer, just the part that points the car where something else decided), and my views don't represent my employer.