In one case, I was discussing license plate recognition (LPR). It seems that when it is used on-street, say, for enforcement, there is no real way to tell if you are getting good reads unless you manually check every one.
Let’s face it: if the LPR system logs a plate but misreads it, and the real plate is on a “boot list,” the match won’t show up, and who will know? That may not be a problem, but what if the opposite occurs and a misread flags a clean car? Whoops.
However, if you run an LPR system at an airport, and you run a manual license plate inventory (LPI) system at the same airport, you know immediately how many misreads you get. Want to guess the number? (I won’t embarrass anyone by telling you which airport.) It’s 25% misread or not read.
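To make that cross-check concrete, here is a minimal sketch of how such an audit scores LPR output against the manual inventory. The plate numbers and matching logic are hypothetical, not any vendor’s system:

```python
# Hypothetical sketch: scoring LPR reads against a manual
# license plate inventory (LPI) of the same facility.
# Plate values and matching logic are illustrative only.

def lpr_error_rate(lpr_reads, lpi_inventory):
    """Fraction of manually inventoried plates that the LPR
    system either misread or failed to read at all."""
    lpr_set = set(lpr_reads)
    missed = [plate for plate in lpi_inventory if plate not in lpr_set]
    return len(missed) / len(lpi_inventory)

lpi = ["7ABC123", "4XYZ789", "2DEF456", "9GHI012"]   # clipboard truth
lpr = ["7ABC123", "4XY2789", "2DEF456"]              # camera output

print(f"{lpr_error_rate(lpr, lpi):.0%} misread or not read")
# -> 50% in this toy example
```

The airport’s 25% figure came from exactly this kind of cross-check, run against the full inventory instead of four cars.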
And how about in-street sensors? It’s the same issue: how do you know if you get misreads? You have to go out there with a clipboard and a stopwatch and compare the actual, visual activity with the data the system reports. Just who does that?
However, if the system is used to generate citations and it makes mistakes, the folks who get the bogus tickets will be standing on your desk. If you issue 500 such citations a day and there is a 2% error rate, that’s 10 bogus tickets a day, or 50 complaints a week. Can you handle that? Well, I think a 98% “good” read rate is pretty good, but it’s not good enough for one city I spoke to.
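A quick back-of-the-envelope sketch of that math (the five-day enforcement week is my assumption):

```python
# The arithmetic behind the complaint volume, using the figures above.
citations_per_day = 500
error_rate = 0.02            # i.e., a 98% "good" read rate
workdays_per_week = 5        # assumption: citations issued on weekdays

bogus_per_day = citations_per_day * error_rate
bogus_per_week = bogus_per_day * workdays_per_week
print(f"{bogus_per_day:.0f} bogus tickets a day")    # 10
print(f"{bogus_per_week:.0f} complaints a week")     # 50
```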
What happens when a system goes off-line but continues to collect credit card data at the pay-on-foot machine or exit lane, and then downloads that data when it is up again? Is any of that data lost? What about verifying whether the cards are valid? How often does it happen? Do we know when it does? How would we know? The only way to be sure is to shut everything down when the system goes down and not let the gates open. Do we do that?
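For what “collect now, verify later” looks like in practice, here is a hedged sketch of the store-and-forward pattern those questions describe. The class and method names are illustrative, not any actual vendor API:

```python
# Hypothetical sketch of a store-and-forward exit lane: while
# offline, card transactions queue up unvalidated; when the link
# returns, they upload and only then get authorized.
from collections import deque
from dataclasses import dataclass

@dataclass
class Transaction:
    card_token: str
    amount_cents: int
    validated: bool = False

class ExitLane:
    def __init__(self):
        self.online = True
        self.offline_queue = deque()

    def charge(self, txn: Transaction):
        if self.online:
            txn.validated = self.authorize(txn)  # real-time check
        else:
            # Gate opens anyway; the card is NOT verified.
            self.offline_queue.append(txn)

    def reconnect(self):
        self.online = True
        while self.offline_queue:
            txn = self.offline_queue.popleft()
            txn.validated = self.authorize(txn)  # declines surface late

    def authorize(self, txn):
        return True  # stand-in for the card processor call
```

The uncomfortable part is the reconnect step: a declined card surfaces hours after the gate has already opened.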
In every case, we are relying on technology – and often broken technology – to tell us if it is broken.
Look at it this way: Microsoft spends billions to debug Windows 8. Tens of thousands beta test it. When it’s released, it’s still full of bugs, and almost daily we get an update from Redmond telling us that a “patch” has been installed to fix this or that problem.
Even then, we know that certain things just don’t work. Quietly, the system teaches us which commands fail, so we simply stop using them. And we know that Ctrl-Alt-Delete will bring up a screen that can get us out of almost anything.
The technology we use in parking is beta-tested at, what, 10 sites. Then it’s put in the field and debugged on the fly. It’s the life that we in the parking world live when dealing with technology.
My discussion with the airport brought out a scary thought. “I really don’t want to go down that path,” my airport friend said. “I really don’t want to know.”
He was referring to the administrative hassle it would cause him if he truly investigated the problems in his system and had to take the action necessary to “fix” them.
All of my discussions led to a conclusion: Technology problems can be minimized with attention in three areas.
First, the user must want the system to work and constantly strive to make it work. You would be surprised how many users just don’t care. They know the Ctrl-Alt-Delete for their system and settle for work-arounds.
Second, the system must be right for the job. Don’t expect to shoot “Star Wars” on your $350 video camera.
Third, the technology must be installed and maintained by organizations that know what they are doing. Don’t expect the average journeyman electrician to be able to get your $2 million technological marvel up and running, and don’t expect the local “Geek Squad” to keep it running.
Then you might get to the 2% error rate I discussed above. If that’s not good enough, then perhaps you need to reevaluate your expectations, or perhaps you need to look into something different to keep your business running well.
JVH
PS: There is technology that works right. But it often runs up against one of the three requirements listed above. My mantra: everything works, mostly, and nothing works. It depends on the rules.