How it works
From zero to a myriad of cameras
As described on the introduction page and in the mission statement, Kerberos.io has a strong vision and roadmap to help anyone on this planet set up a video management platform that fits their needs. In this section we’ll describe the different building blocks and illustrate how they complement and enrich each other to build up an ideal deployment model.
As shown below, there are three critical steps in setting up a video management solution: camera processing, persisting recordings, and analysing them. Instead of building a single solution that covers all three functions, as many other vendors have done, we divide and conquer: each role and responsibility lives in a stand-alone solution: Kerberos Agents, Kerberos Factory, Kerberos Vault and Kerberos Hub.
Thanks to this approach, the Kerberos.io solution stack can be scaled and deployed independently. This means you can deploy specific parts on-premise and other parts on a cloud provider, or the other way around. It also allows you to install only what you need: start small and grow over time as your business requires it.
The Kerberos Agent
At the foundation of any Kerberos.io deployment you’ll find one or more Kerberos Agents. These Kerberos Agents can be installed in various ways and are deployed to the compute of your choice (a VM, bare metal, a Kubernetes cluster, or other) and connected to camera streams you control.
The Kerberos Agent is responsible for a single camera. It is a piece of software with two responsibilities: it acts as a user interface (frontend) and an API server (backend). The API processes the video stream, applies computer vision techniques, makes recordings and takes desired actions, e.g. calling a webhook. The user interface, on the other hand, allows a user to review recordings and configure specific settings for the API.
The Kerberos Agent itself is bundled in a single container, which includes all the dependencies and libraries required to run it. For each camera, a Kerberos Agent container is created.
The appealing thing about this approach is complete isolation: if one Kerberos Agent goes down, it will not affect any other Kerberos Agent (or camera). It also makes scaling elegant.
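The one-agent-per-camera model can be sketched in a few lines of Python. Everything below (the Agent class, the camera inventory) is illustrative only, not actual Kerberos Agent code; the point is simply that each camera gets its own isolated worker, so a failure in one stream stays local:

```python
# Illustrative sketch, not actual Kerberos Agent code: one isolated
# worker per camera, so a failure in one stream stays local.

class Agent:
    """Stands in for one Kerberos Agent container handling one camera."""
    def __init__(self, camera_url: str):
        self.camera_url = camera_url
        self.healthy = True

    def fail(self) -> None:
        # Simulate the container crashing or the stream dropping.
        self.healthy = False

# Hypothetical camera inventory.
cameras = {
    "garage": "rtsp://192.168.0.10/stream",
    "garden": "rtsp://192.168.0.11/stream",
}

# One agent (container) per camera.
agents = {name: Agent(url) for name, url in cameras.items()}

agents["garage"].fail()

# The garage agent is down, but the garden agent is untouched.
assert not agents["garage"].healthy
assert agents["garden"].healthy
```

Scaling out then amounts to adding another entry to the inventory and spinning up one more container, without touching the agents that are already running.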
Scaling out Kerberos Agents
Starting with a few Kerberos Agents is straightforward, and scaling your Kerberos Agents is not a complex task at all. You can benefit from the different deployment models we have documented to start exploring and scaling your Kerberos.io configuration.
Depending on your scenario you will choose one deployment over another. There is no golden rule for the best deployment; move forward with your preference and experience. A few examples:
If you are not into Kubernetes and you only have a dozen cameras, it might be more suitable to use a simpler deployment such as plain Docker.
On the other hand, if you have hundreds of cameras and plan to install more over the coming months and years, you will benefit from the elasticity Kubernetes provides out of the box.
If you would rather have non-technical users managing the video landscape, then Kerberos Factory might be a good choice.
Whatever you choose, you can always migrate from one option to another; only the engine on which the Kerberos Agent containers run changes.
Storing data where you want
Kerberos Agents are responsible for storing recordings and triggering events. By default all recordings are stored within the Kerberos Agent container, which means that if the container stops, all your data is lost. Luckily there are a couple of techniques to persist the data outside the container, without losing any information.
In most cases, especially with a growing video landscape, it’s more convenient to have a central storage system in place that scales, like Ceph, MinIO, or cloud storage such as S3 and other blob storage. This is exactly where Kerberos Vault comes into the picture.
Kerberos Vault acts as an interface between your Kerberos Agents and your storage system. It is responsible for receiving recordings from your Kerberos Agents and storing them in the storage system you’ve configured. By decoupling your Kerberos Agents from the storage system through Kerberos Vault, you can switch the underlying storage system on the fly, without having to reconfigure all your Kerberos Agents.
Next to persisting your data in your storage system, Kerberos Vault also acts as an event producer. Each time a recording is successfully stored in your storage system, it sends a message to the configured integration, such as Kafka, RabbitMQ, SQS, etc.
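The combination of a pluggable storage backend and an event per stored recording can be sketched as follows. The class and method names below are hypothetical, not the real Kerberos Vault API; they only illustrate the pattern of swapping storage without touching the agents, and publishing an event on every successful store:

```python
# Illustrative sketch of the Vault pattern: a pluggable storage backend
# plus an event published after every successful store.
# All names here are hypothetical, not the real Kerberos Vault API.

class MemoryStorage:
    """Stand-in for S3, MinIO, Ceph, ... - anything with a put()."""
    def __init__(self):
        self.objects = {}

    def put(self, key: str, data: bytes) -> None:
        self.objects[key] = data

class Vault:
    def __init__(self, storage, events):
        self.storage = storage  # swappable without touching the agents
        self.events = events    # e.g. a Kafka/RabbitMQ/SQS producer

    def store_recording(self, key: str, data: bytes) -> None:
        self.storage.put(key, data)
        # Only after a successful store is an event produced.
        self.events.append({"event": "recording-stored", "key": key})

events = []
vault = Vault(MemoryStorage(), events)
vault.store_recording("garage/2024-01-01_12-00-00.mp4", b"\x00\x01")

assert events == [{"event": "recording-stored",
                   "key": "garage/2024-01-01_12-00-00.mp4"}]
```

Because the agents only ever talk to the Vault interface, replacing `MemoryStorage` with a different backend is invisible to them.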
Centralisation and governance
Scaling your Kerberos Agents and having a scalable and flexible storage system with Kerberos Vault is a strong foundation. However, data that merely sits in your storage system doesn’t bring any value.
Utilising that data to give your stakeholders insights through analytics, providing them with decent data governance, and combining it with live data is where the magic starts.
Kerberos Hub is our answer. It’s a highly scalable platform to connect stakeholders to sites and groups of cameras. It comes with all the features you would imagine: live streaming, object detection, fine-grained user access, alerts and more.
Kerberos Hub is built on top of Kubernetes and, just like all the other components, can be deployed where you want. It’s composed of a series of microservices that can independently scale to meet any demand, and it utilises messaging components such as Kafka, RabbitMQ and SQS for high throughput.
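A Hub-style microservice is then essentially a consumer of the storage events produced by Kerberos Vault. The sketch below is an assumption-laden illustration, not Kerberos Hub code: an in-memory `queue.Queue` stands in for Kafka/RabbitMQ/SQS, and the message fields are made up for the example:

```python
# Illustrative sketch, not Kerberos Hub code: a microservice consuming
# storage events from a queue and aggregating them for analytics.
# queue.Queue stands in for Kafka/RabbitMQ/SQS; field names are made up.
import queue

events = queue.Queue()
events.put({"event": "recording-stored", "site": "hq", "key": "cam1/a.mp4"})
events.put({"event": "recording-stored", "site": "hq", "key": "cam2/b.mp4"})

# Aggregate: number of recordings stored per site.
recordings_per_site = {}
while not events.empty():
    msg = events.get()
    site = msg["site"]
    recordings_per_site[site] = recordings_per_site.get(site, 0) + 1

assert recordings_per_site == {"hq": 2}
```

Because each such consumer reads from the message bus independently, every microservice can be scaled on its own as the event volume grows.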
Kerberos.io comes with different components which you only install when required; there is no need to set up a sophisticated system from the beginning. Each component works on its own and is open and extensible through APIs. Our vision is to start small, with just a few Kerberos Agents, and to scale up and introduce more components such as Kerberos Vault and Kerberos Hub when your use case requires them.
If you need some help with possible deployments, have a look at the deployment page, where we illustrate some examples.