Vision
Surveillance and AI: where does the real value lie?

AI is valuable, if you do it right
11 December 2025
AI has quickly become the hot topic of conversation in our industry. Manufacturers are shouting about it, integrators are confronted with it, and end customers almost automatically expect it. Yet, in practice, we see that the transition from promise to tangible value doesn’t happen automatically. The technology is developing rapidly, but its implementation lags behind. This creates a gap between what customers think AI can do and what a solution actually delivers. If we fail to address that gap openly, AI risks being dismissed as hype with little added value.
The reality is that AI isn’t hype. But neither is it a feature you simply activate as if it were an additional resolution setting. It’s a specialized field that requires knowledge, context, and experience. Only then will AI become a valuable tool.
The market wants AI, but doesn’t fully understand it yet
Recent market studies show that AI is seen as a key strategic trend in video surveillance. At the same time, adoption is still in its early stages. Many organizations see the benefits on paper but struggle with how to apply AI in their own environments. Expectations are rising faster than market knowledge can keep up. This puts pressure on integrators, while end customers sometimes assume every camera is automatically smart.
In many implementations, AI remains stuck at basic tasks like detecting people or vehicles. This is useful, but it doesn’t represent the level of intelligence customers envision. Object detection isn’t behavior. It’s not context. It’s not prevention either. It simply generates more alerts that still require manual review.
True power only emerges when AI understands what’s happening. This requires models that can analyze intent and context, and specialists who know how to integrate this into an operational workflow. Without that expertise, AI becomes nothing more than a checkbox.
Technology without expertise leads to disappointment
There’s a persistent misconception that AI always works, regardless of the environment. In reality, every project is unique. A model that functions perfectly in a controlled environment can fail in a busy or dynamic setting. Light, reflections, seasonal influences, behavioral patterns, risk profiles, and even local culture all affect the quality of AI results.
This is precisely why many projects end in disappointment. Not because the technology is poor, but because choices are made without sufficient knowledge of the scenario. Selecting the right models, tuning thresholds, linking them to workflows, and assessing reliability requires expertise. Without this step, AI remains nothing more than a buzzword.
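To make that tuning step concrete, here is a minimal sketch in Python of how per-site parameters might be kept separate from the model itself. All names (SiteProfile, min_confidence, the workflow labels) are hypothetical and not tied to any specific product; the point is only that the same detection model needs different thresholds and escalation rules in different environments.

```python
from dataclasses import dataclass

@dataclass
class SiteProfile:
    """Hypothetical per-site tuning: the same model, different settings."""
    name: str
    min_confidence: float   # detection scores below this are ignored
    min_duration_s: float   # how long an object must persist before alerting
    notify: str             # which workflow receives the alert

# A quiet warehouse at night tolerates a low threshold; a busy station does not.
PROFILES = {
    "warehouse_night": SiteProfile("warehouse_night", 0.40, 2.0, "guard_phone"),
    "station_daytime": SiteProfile("station_daytime", 0.85, 10.0, "control_room"),
}

def should_alert(profile: SiteProfile, confidence: float, duration_s: float) -> bool:
    """Apply the site-specific rules to a single detection event."""
    return confidence >= profile.min_confidence and duration_s >= profile.min_duration_s

# The same detection event passes in one environment and not in the other.
event = {"confidence": 0.6, "duration_s": 5.0}
for profile in PROFILES.values():
    print(profile.name, should_alert(profile, **event))
```

The design choice the sketch illustrates is simply that environment-specific knowledge lives in configuration chosen by a specialist, not in the model, which is exactly where the expertise described above comes in.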
The legal and ethical context increases the challenge
In Europe, with its strong legal framework, AI should never be considered in isolation from regulations such as the GDPR, NIS2, and the AI Act. These rules obligate organizations to handle data responsibly, to provide transparency, and to be able to explain their decisions. That is as it should be, but it also means that a poorly designed AI implementation immediately poses risks in terms of privacy, security, and compliance.
This, too, requires expertise. AI in security isn’t just about what’s technically feasible, but also about what’s responsible, explainable, and future-proof.
Where AI does add value
When applied correctly, AI can deliver something traditional security systems never could: insight. Not just seeing what’s happening, but understanding why it’s happening. Not just responding to incidents, but anticipating them.
AI can recognize patterns invisible to humans. It can flag anomalies in behavior instead of just movement. It can support operators by filtering out the noise and only providing alerts that matter. And it can accelerate forensic investigations by answering natural-language questions, so investigators no longer have to review hours of footage manually.
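As a simplified illustration of that filtering idea (a sketch, not any vendor’s implementation), the snippet below compares an hourly event count against a learned baseline and escalates only strong deviations; everything else is logged but never interrupts the operator. The baseline figures and the threshold are invented for the example.

```python
from statistics import mean, stdev

# Hypothetical hourly motion-event counts learned from past footage (invented numbers).
quiet_hours_history = [4, 3, 5, 4, 6, 5, 4]     # e.g. 02:00-03:00 on weekdays
busy_hours_history = [80, 95, 70, 90, 85, 88]   # e.g. 09:00-10:00 on weekdays

def is_anomalous(count: int, history: list[int], z_threshold: float = 3.0) -> bool:
    """Escalate only when the count deviates strongly from the learned pattern."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(count - mu) / sigma > z_threshold

# 70 motion events is alarming in the middle of the night but routine at rush hour.
print(is_anomalous(70, quiet_hours_history))  # True  -> alert the operator
print(is_anomalous(70, busy_hours_history))   # False -> log only, no interruption
```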
But only if the technology is guided by expertise.
The IDIS position: AI is valuable, if you do it right
At IDIS, we view AI not as a product but as a field of expertise. We recognize that the market is changing rapidly, but at the same time, we see that many organizations need guidance, explanation, and realistic expectations. In our conversations with partners, we notice that they are looking for a sparring partner to help them make choices that align with their customers’ operational realities.
Therefore, we don’t position AI as a magic bullet, but as a tool that only works with the right expertise. We help integrators determine which technology is suitable for which environment, how to make a model reliable, how to connect it to security processes, and how to comply with European regulations. This is not for commercial reasons, but because a well-functioning solution ultimately strengthens everyone’s reputation.
AI will become a standard component of modern security. The difference between success and failure will soon lie not in the hardware or the software, but in the knowledge with which it is applied. That knowledge is the key to value.
Conclusion: from hype to craftsmanship
AI in security isn’t hype. It’s a powerful development that will structurally change our sector. But it’s also not a panacea. Value only arises when technology is combined with an understanding of risks, behavior, processes, and regulations.
Our role as a sector is clear. We must help end customers by being honest about the technology’s possibilities and limitations. We must support integrators in acquiring knowledge. And we must approach AI as a craft, not a feature.
