Of Race, Humanity, and AI

We are getting a lot of reaction to Astrid’s first-person blog post last month on the riots in Los Angeles. I just reread it, and I believe the positive reaction is because her post comes from love, not hate or anger. It’s so easy to criticize and show your anger. When I asked Astrid to write the piece, I told her to write only what she saw, not what she heard from others. It’s so easy to get caught up in the agendas that surround us all. She did a great job. Check it out on page 30.

I have been thinking about the maelstrom in which we find ourselves. Is everything truly about race? Is every action we take meant to reflect white privilege and white supremacy? Certainly that is so, if you believe the media and cancel culture.

In the past few days, I have come across a couple of blogs and articles that hint at possible racism in parking. I have been reluctant to blog about it because I felt that it was adding fuel to an already roaring fire. I have been reconsidering that position. 

Michael Conner at Kimley Horn has written a thoughtful piece about residential parking permit programs and how they may be construed, even obliquely, as racist and as redlining. My immediate reaction was to spike it. Do we really need more articles about race, particularly in the parking industry, for goodness sake? Michael’s article starts like this:

In theory, parking is color blind and unbiased. A parking space doesn’t know the color of your skin, your economic status, or any other personal features that may relate to you. It doesn’t care if you are a doctor, lawyer, administrative assistant, or customer at a restaurant—all it asks is that you pay the appropriate fee and/or follow the posted restriction. But is parking as equal as we think? Are there elements within the parking industry and in the parking experience that are inherently biased toward one group or another?

Tony Jordan’s group, the Parking Reform Network, up in Portland, is positing the idea that parking minimums may be inherently racist. To some extent, they make low-cost housing more difficult to build and less attainable for the poorer sectors of our society. You can see where that is going.

Michael thinks we need this discussion. Who knows how it will come out? Maybe we will shine some sunlight into some dark corners of our profession. Or who knows, maybe we will find that some of the rules we make could use ‘adjustment’ because they are inherently unfair.

Unfortunately, the July issue of PT (this one) was nearly in print when I received Michael’s article, so I’m not able to get it in. I was just able to squeeze this mention into “Point of View.” But with August, I hope to begin an ongoing discussion. Michael’s article is fair and open, and I will invite Tony to be a part of it as well. I invite everyone else to join in. Let’s talk.


Have you noticed that we don’t see so many articles these days about how autonomous vehicles will be taking over the automotive industry and that there will be no need for parking in the future? What has happened to the AV industry?

There is an article Astrid put up last month on ParkNews.biz titled “Driverless Cars Show the Limits of Today’s AI” that may explain it. The article, from the Economist, posits that artificial intelligence, particularly when dealing with complex tasks like driving, has a long way to go. Problems like an airplane landing on a highway, someone jumping out in a chicken suit, or a stop sign covered with stickers often flummox the computer, but they are things a human can deal with without thinking.

Computers have advanced on the back of Moore’s Law: computing power gets cheaper and faster exponentially. In other words, a computer processes terabytes of data (which it must do to drive a car) using brute force. It’s not smart, just fast.

In March, Starsky Robotics, a self-driving lorry firm based in San Francisco, closed down. Stefan Seltz-Axmacher, its founder, gave several reasons for its failure. Investors’ interest was already cooling, owing to a run of poorly performing tech-sector IPOs and a recession in the trucking business. His firm’s focus on safety, he wrote, did not go down well with impatient funders, who preferred to see a steady stream of whizzy new features. But the biggest problem was that the technology was simply not up to the job. “Supervised machine learning doesn’t live up to the hype. It isn’t actual artificial intelligence akin to C-3PO [a humanoid robot from the “Star Wars” films]. It’s a sophisticated pattern-matching tool.”

Wow! VCs are impatient, and the tech simply is not up to the job. Who knew?

One study, for instance, found that computer-vision systems were thrown when snow partly obscured lane markings. Another found that a handful of stickers could cause a car to misidentify a “stop” sign as one showing a speed limit of 45mph. Even unobscured objects can baffle computers when seen in unusual orientations: in one paper a motorbike was classified as a parachute or a bobsled. Fixing such issues has proved extremely difficult, says Seltz-Axmacher. “A lot of people thought that filling in the last 10 percent would be harder than the first 90 percent”, he says. “But not that it would be ten thousand times harder.”

This is what happens when people begin to believe their own press. 

Elon Musk has discovered that it’s easier to fire a Tesla into orbit, or shuttle humans to the International Space Station, than to have a Tesla drive itself in a snowstorm. The technocrats are finding that wishing it so is much easier than making it so.

As I have said before, but it bears repeating: self-driving vehicles will first be shuttles in a confined environment, then perhaps long-haul trucking and Grubhub-type deliveries, and then taxis in a limited geographic area. Level 5 AV, the ‘Jetsons’ solution, is a long time off, if ever, with current technology.


Article contributed by:
John Van Horn