By Katherine Beaty
Many of you may remember the 2004 movie “I, Robot” starring Will Smith and set in Chicago in the year 2035. In this vision of the future, highly intelligent robots fill public service positions and operate under three laws designed to keep humans safe. The movie was inspired by the science fiction of Isaac Asimov, who first introduced the following “Three Laws of Robotics”:
- First Law — A robot must not harm a human or, through inaction, allow a human to come to harm.
- Second Law — A robot must obey human orders unless doing so would conflict with the First Law.
- Third Law — A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
What’s fascinating is that the future depicted in this movie isn’t that far off. With the rapid advancements in artificial intelligence (AI), such a world could be within reach in just over a decade — or perhaps even sooner. But are we ready for machines to make decisions for us?
The 5 levels of AI
On July 12, 2024, Seuk-Min Sohn — the author of the book “The Last AI: Of Humanity Climbing the AI Pyramid” — shared a LinkedIn post highlighting the “5 Levels of Artificial Intelligence” framework developed by OpenAI, the company that created the AI chatbot ChatGPT.
This framework outlines the progressive stages of AI development:
- Level 1 — chatbots, or basic conversational agents
- Level 2 — reasoners, or systems that can solve problems and reason, like ChatGPT or Copilot
- Level 3 — agents, or AI capable of acting on a user’s behalf
- Level 4 — innovators, or AI that can independently create innovations
- Level 5 — organizations, or fully autonomous AI entities capable of running organizations
Where are we now?
If we take this framework as a roadmap, it’s clear we’ve reached Level 1 (chatbots), as conversational AI is now commonplace. Level 2 (reasoners) is also within our grasp, with tools like ChatGPT, Copilot, and Gemini offering reasoning capabilities. These advancements pave the way for Level 3 (agents), where AI can act autonomously on a user’s behalf. We’re already seeing early examples, such as Amazon’s cloud-based voice service Alexa setting reminders and timers and performing simple tasks.
Levels 4 and 5 — where AI independently innovates and organizes — may still be on the horizon, but they are no longer inconceivable.
Deep dive into Level 3: AI agents in parking
AI agents are autonomous or semi-autonomous software programs designed to perform specific tasks, make decisions, or interact with environments to achieve goals. They leverage machine learning, natural language processing, and computer vision to analyze input, process information, and take action. But what could this look like in the parking and mobility industry?
For years, the industry has pursued dynamic pricing and rate optimization. AI agents could analyze demand, occupancy, and external factors like local events in real time, adjusting parking rates to maximize revenue and ensure fair pricing. In fact, we’re already beginning to see this with AI-generated rate engines. These systems not only calculate rates from plain-text inputs but also identify gaps in pricing logic and simulate different pricing scenarios for better outcomes.
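To make the idea concrete, here is a minimal sketch of demand-responsive pricing. Everything in it — the function name, the signals, and the multiplier values — is hypothetical, chosen only to illustrate how occupancy and local-event data could feed a rate adjustment; a production rate engine would learn these weights from historical data.

```python
def adjusted_rate(base_rate, occupancy, event_nearby=False,
                  floor=0.5, ceiling=3.0):
    """Scale a base hourly rate by real-time demand signals.

    occupancy: fraction of spaces filled (0.0 to 1.0).
    event_nearby: True if a local event is driving extra demand.
    The demand multiplier is clamped between `floor` and `ceiling`
    to keep pricing fair and predictable.
    """
    multiplier = 0.5 + occupancy      # 0.5x when empty, 1.5x when full
    if event_nearby:
        multiplier += 0.5             # surcharge during local events
    multiplier = max(floor, min(ceiling, multiplier))
    return round(base_rate * multiplier, 2)

# A quiet weekday morning vs. a near-full lot on a stadium night:
print(adjusted_rate(4.00, occupancy=0.30))                     # 3.2
print(adjusted_rate(4.00, occupancy=0.95, event_nearby=True))  # 7.8
```

An AI agent would differ from this fixed rule mainly in where the multiplier comes from: instead of hand-set constants, it would be predicted from demand, occupancy history, and event feeds, and simulated before deployment.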
Imagine a self-parking system that assigns and directs autonomous vehicles to available spaces without human intervention. AI could streamline the "first-last mile" journey by recommending nearby transit or micro-mobility options after a vehicle is parked, helping users complete their trips seamlessly.
AI agents could also transform enforcement. Picture a license plate recognition (LPR) system with advanced AI that identifies repeat violators, alerts them to their outstanding citations before they leave their vehicle, and notifies enforcement officers of the violator’s location in real time.
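The enforcement flow described above can be sketched in a few lines. The plate data, lookup table, and field names here are all invented for illustration; a real system would query a citation database and push notifications to officer and driver apps.

```python
# Hypothetical store of open citations, keyed by plate number.
OPEN_CITATIONS = {
    "ABC1234": ["2024-0917: expired meter", "2024-1102: no permit"],
}

def on_plate_read(plate, location):
    """Handle a single read from a (hypothetical) LPR camera.

    Returns a notification payload when the vehicle has outstanding
    citations, so enforcement can be alerted to the vehicle's
    location in real time; returns None for vehicles in good standing.
    """
    citations = OPEN_CITATIONS.get(plate)
    if not citations:
        return None
    return {
        "plate": plate,
        "location": location,
        "open_citations": citations,
        "action": "notify_officer_and_driver",
    }
```

The agent-like part is the last step: rather than just logging the read, the system decides who to notify and acts on that decision without an operator in the loop.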
For facilities with EV charging stations, AI agents could optimize charging by prioritizing vehicles based on need and balancing energy loads during peak times. This would reduce grid stress and ensure availability during busy hours, creating a more efficient and sustainable system for electric vehicles.
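As a rough illustration of the load-balancing idea, the sketch below allocates a limited power budget to the vehicles that need charge most. The greedy lowest-state-of-charge-first rule and all field names are assumptions for the example; a real agent would also weigh departure times, tariffs, and grid signals.

```python
def allocate_charging(vehicles, grid_capacity_kw):
    """Divide a limited power budget among charging vehicles.

    vehicles: list of dicts with hypothetical fields
      'id', 'soc' (state of charge, 0.0 to 1.0), and 'max_kw'
      (the most power that vehicle's charger can draw).
    Vehicles with the lowest state of charge are served first,
    and allocation stops once grid capacity is exhausted.
    """
    plan = {}
    remaining = grid_capacity_kw
    for v in sorted(vehicles, key=lambda v: v["soc"]):
        kw = min(v["max_kw"], remaining)
        if kw <= 0:
            break
        plan[v["id"]] = kw
        remaining -= kw
    return plan

# With 15 kW available, the nearly empty battery gets a full
# 11 kW feed and the half-charged one gets the remainder:
fleet = [
    {"id": "a", "soc": 0.8, "max_kw": 11},
    {"id": "b", "soc": 0.2, "max_kw": 11},
    {"id": "c", "soc": 0.5, "max_kw": 11},
]
print(allocate_charging(fleet, grid_capacity_kw=15))  # {'b': 11, 'c': 4}
```

Rerunning this allocation as vehicles arrive, depart, and change state of charge is what keeps peak loads below the facility's grid limit.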
Salesforce’s Agentforce: a glimpse into AI agent potential
In September 2024, the software company Salesforce announced the launch of Agentforce, its suite of autonomous AI agents designed to enhance efficiency and improve customer experiences across business functions like service, sales, marketing, and commerce. These agents operate independently, performing tasks and making decisions on behalf of users.
Agentforce is built on the AI-powered decision-making tool Atlas Reasoning Engine, which enables agents to analyze data, make informed decisions, and execute tasks reliably. By leveraging low-code tools, businesses can quickly customize and deploy AI agents tailored to their needs. Agentforce also integrates seamlessly with Salesforce’s existing platforms for customer relationship management, enhancing its utility across various industries.
With real-world applications already being implemented by companies like the online restaurant reservation service OpenTable, the luxury department store chain Saks Fifth Avenue, and the publisher Wiley, Salesforce’s announcement underscores the growing role of AI agents in modern business. It’s not hard to imagine how similar innovations could apply to the parking and mobility sector.
The future is now
The possibilities for AI agents in parking are nearly limitless, from enhancing customer experiences to optimizing operations and sustainability. Although there’s still work to be done to reach Levels 4 and 5 of AI development, the advancements we’re seeing today are already reshaping the way we think about parking and mobility.
Are we ready to let AI agents take the wheel? Only time will tell. But the future is closer than we think.
KATHERINE BEATY is executive vice president of customer experience for TEZ Technology. She can be reached at katherine@teztechnology.com.