Complexity

“The future of AI is not about replacing humans, it’s about augmenting human capabilities.”
— Sundar Pichai, CEO of Google

Last week (4 November), I spoke at the 2025 Incheon Global AVSEC Seminar in South Korea. Such seminars can be self-congratulatory, with speakers taking the opportunity to highlight the commendable progress made in AVSEC, including the benefits of the latest technology.

I chose a different approach, speaking under the title, "It's not about the machine, it's about people." While recognising many benefits of the technology that is coming into play to facilitate passengers and integrate security into a seamless flow through airports, I also offered a word of caution.

Imagine a future where walking through an airport security check is a seamless experience, with no need to remove your shoes or take out your liquids and laptop. A system that not only recognises you but also values your time and safety. A future where human intuition and machine intelligence work in harmony, creating a system that is not only safer and more efficient but also less intrusive. This is the potential of auto-clear.

Nevertheless, the systems we are deploying are becoming more complex. Sometimes the flow of information overwhelms our capacity as security operators to manage it. Consequently, under time constraints and swiftly changing situations, we may make poor decisions. These decisions, in turn, could exacerbate the problem.

I cited the 2018 Gatwick drone saga as an example. I've discussed this event in some detail in an earlier blog. I could have chosen to use the 2013 LAX shooting, an incident that lasted less than ten minutes but caused disruption for days.

Meanwhile, airport security leaders are under pressure to deliver faster, more accurate screening, often with limited resources and a mix of technologies from different suppliers, including legacy systems.

When developing security systems, it's essential to recognise that we don't always fully grasp every aspect, particularly the processes that govern our interactions with machines (computers and AI). This is where human oversight becomes increasingly important. We must question whether we truly understand how humans think, react, and process the information presented to them, and whether those processes lead to optimal outcomes. That understanding, and the supervision built on it, can offer reassurance and confidence amid complexity.

And if we don't, poor judgment could disrupt operations.

We also need to recognise the limitations of the technology. To begin with, machines cannot comprehend context and nuance. A perimeter detection system alerts you when someone enters a restricted area, but it cannot tell who that person is, nor recognise an autistic person who does not understand the rules we impose. Likewise, full-body scanners frequently raise false alarms on black passengers because thick hair or a hair covering confuses the system.

Always remember that machines are trained within the parameters we supply. If you train AI models predominantly on people with light skin tones or Asian features, they may mislabel everyone else as suspect.
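
To make this concrete, here is a minimal sketch of what auditing a training set for demographic balance might look like before a model is trained. The group labels, the 10% threshold, and the load_training_labels() helper are hypothetical placeholders, and real fairness auditing goes well beyond simple counting.

```python
# A minimal balance audit for a training set. The MIN_SHARE threshold and
# the group labels are illustrative assumptions, not a standard.
from collections import Counter

MIN_SHARE = 0.10  # flag any group that makes up under 10% of the data

def audit_balance(group_labels: list[str]) -> dict[str, float]:
    """Return each group's share of the training data, warning on gaps."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    for group, share in sorted(shares.items()):
        if share < MIN_SHARE:
            print(f"WARNING: '{group}' is only {share:.1%} of the training data")
    return shares

# Hypothetical usage - load_training_labels() stands in for your pipeline:
# shares = audit_balance(load_training_labels("screening_dataset"))
```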

Similarly, our security staff have limitations - we understand that. They become tired and have good and bad days. Attention and vigilance diminish, which is why we already limit how long screeners can stay on task. Humans are poor at maintaining focus during long, repetitive tasks (like analysing X-ray images). Machines do not get bored or tired.

Humans also have cognitive biases. During the Gatwick incident, people dispatched to search for drones at night interpreted almost any light in the sky as a drone. Humans are also vulnerable to the "Cry Wolf" effect, where too many false alarms from a machine cause operators to begin ignoring or dismissing its alerts, including genuine ones.
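
The arithmetic behind the Cry Wolf effect is worth seeing on paper. With illustrative numbers - genuine threats are rare, while the machine raises a modest rate of false alarms - only a small fraction of alerts turn out to be real. The figures below are invented for the sketch, not operational data.

```python
# Base-rate sketch of the "Cry Wolf" effect. All figures are illustrative.
prevalence = 0.001        # 1 in 1,000 screened items involves a genuine threat
sensitivity = 0.95        # the machine flags 95% of genuine threats
false_alarm_rate = 0.05   # the machine flags 5% of harmless items

true_alerts = prevalence * sensitivity
false_alerts = (1 - prevalence) * false_alarm_rate

# Positive predictive value: the chance that any given alert is genuine
ppv = true_alerts / (true_alerts + false_alerts)
print(f"Share of alerts that are genuine: {ppv:.1%}")  # about 1.9%
```

At these assumed rates, roughly 98 in every 100 alerts are false, so it is little wonder that operators begin to tune them out.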

At the other extreme, operators may blindly trust the machine while ignoring their common sense. Alongside this is information overload, which hampers our ability to discern the truth of events. Once again, in the Gatwick incident, not a single CCTV recording or photograph emerged to substantiate witnesses' sighting claims.

How should we manage these complex systems? The solution is slowly emerging - but it will take time, and we will make mistakes. 

We might have a clue within our industry. While both Airbus and Boeing produce highly advanced and safe aircraft, their underlying philosophies regarding pilot functions and cockpit design highlight a fundamental difference in how they approach automation. 

Boeing adopts a "pilot-in-command" philosophy, where automation functions as a support tool for the pilot. The pilot makes direct control inputs through the yoke, and the aircraft's systems respond to those commands; automation can be disconnected or overridden without changing the core flight mode.

In contrast, Airbus builds its fly-by-wire cockpits around a "systems manager" philosophy: the pilot oversees the operation of the aircraft's systems, commanding flight parameters through a side-stick, while the flight-control computers determine the most effective way to achieve them and keep the aircraft within a protected flight envelope.
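
For illustration only, the difference between the two philosophies can be caricatured in a few lines of code. This is a toy sketch, not avionics software; the control gain is an assumption, and the 67-degree bank limit of Airbus's normal-law protection is the only grounded figure, heavily simplified here.

```python
# Toy contrast of the two cockpit philosophies (illustrative only).

def boeing_command(yoke_deflection: float) -> float:
    """Pilot-in-command: the control surfaces track the pilot's input
    directly; automation may assist, but the command path stays with the pilot."""
    return yoke_deflection

def airbus_command(sidestick_input: float, current_bank_deg: float) -> float:
    """Systems manager: the input is a request; the computers choose the
    actual command and enforce envelope limits (here, bank protection)."""
    requested_bank = current_bank_deg + sidestick_input * 5.0  # assumed gain
    protected_bank = max(-67.0, min(67.0, requested_bank))     # normal-law bank limit
    return protected_bank - current_bank_deg                   # computed correction

# Near the limit, a full sidestick input is trimmed by the computers:
print(airbus_command(sidestick_input=1.0, current_bank_deg=65.0))  # 2.0, not 5.0
```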

Considering our increasingly complex security systems, we must evaluate what is optimal in terms of security outcomes, facilitation, and cost. These challenges are not new, but ask yourself: are you being thorough in testing your people and your systems?

The solution to these challenges lies in thorough and honest testing of systems. We must be proactive, beginning with tabletop exercises. A tabletop exercise that combines scenario planning, red teaming, and a deliberate effort to surface uncertainty can start the testing process.

Tabletop exercises should then be followed by real-time on-the-ground drills that present increasingly challenging scenarios, including worst-case options. Relying solely on simple, highly scripted rote exercises each time will not uncover all the flaws. 

All these efforts can be recorded in a risk register, which details the most significant risks and the actions needed to manage them. Regular reviews of the risk register should be included as part of any contingency planning. 
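
As a sketch, a single risk-register entry might look something like the structure below. The fields and the simple likelihood-times-impact score are illustrative assumptions rather than an industry standard, and the example risk is drawn from the Cry Wolf discussion above.

```python
from dataclasses import dataclass, field
from datetime import date

# One possible shape for a risk-register entry (illustrative, not a standard).
@dataclass
class RiskEntry:
    risk: str                  # what could go wrong
    likelihood: int            # 1 (rare) to 5 (almost certain)
    impact: int                # 1 (negligible) to 5 (severe)
    owner: str                 # who is accountable for managing it
    mitigations: list[str] = field(default_factory=list)
    next_review: date | None = None

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry(
        risk="Operators dismiss genuine alarms after repeated false alerts",
        likelihood=4, impact=4, owner="Head of Screening",
        mitigations=["tune alarm thresholds", "rotate screeners", "unscripted drills"],
        next_review=date(2026, 1, 15),
    ),
]

# Review the register with the biggest risks first
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(entry.score, entry.risk)
```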

Today, we stand at a pivotal moment in aviation history, where the convergence of artificial intelligence, advanced sensors, and data analytics is fundamentally transforming the way we safeguard the global travelling public.

However, I ask again: Are you operating the system, or is the system operating you?

Steve Wordsworth