Artificial intelligence
Artificial intelligence events can be managed in two ways:
using Giraffe's own AI detection features
using the camera's onboard AI features (optionally combined with a filtering service such as Calipsa or DeepAlert)
At Giraffe, we are training our own AI model, predominantly on construction site environments. We are training the model to identify vehicles and people.
When designing our AI system, we wanted to keep the configuration required per camera to a minimum. This is why you will not find features like line crossing to set up.
The way the system works is as follows:
We identify any people or vehicles in each frame of the video.
We store the position of each of those objects
In subsequent frames, if the position of a tracked object has changed, we generate an alarm
When an alarm is raised, we take the 15 seconds of video before the alarm and the 15 seconds after it (see the sketch below).
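As an illustration of this flow, the sketch below compares each frame's detections against the stored positions and raises an alarm only when a tracked object has moved. The names (TrackedObject, process_detections, on_alarm) and the movement threshold are illustrative assumptions, not Giraffe's actual implementation.

```python
# A minimal sketch of the detect-track-alarm flow described above.
# All names and the threshold value are illustrative, not Giraffe APIs.
from dataclasses import dataclass
from typing import Callable

MOVEMENT_THRESHOLD_PX = 20.0  # assumed pixel distance that counts as "moved"


@dataclass
class TrackedObject:
    object_id: int
    label: str   # "person" or "vehicle"
    x: float     # position of the detection within the frame
    y: float


def has_moved(previous: TrackedObject, current: TrackedObject) -> bool:
    """True if the object's position changed by a sufficient amount."""
    dx, dy = current.x - previous.x, current.y - previous.y
    return (dx * dx + dy * dy) ** 0.5 >= MOVEMENT_THRESHOLD_PX


def process_detections(
    detections: list[TrackedObject],
    tracked: dict[int, TrackedObject],
    on_alarm: Callable[[TrackedObject], None],
) -> None:
    """Compare this frame's detections against the stored positions."""
    for obj in detections:
        previous = tracked.get(obj.object_id)
        if previous is not None and has_moved(previous, obj):
            # In the real pipeline this is where the 15 seconds of video
            # either side of the alarm frame would be clipped and uploaded.
            on_alarm(obj)
        tracked[obj.object_id] = obj  # store the latest known position
```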
The AI is configured on a per-camera basis on the Analytics tab.
Enabled. This controls whether the video stream from this camera is fed through the AI pipeline. If this is disabled, no AI events will be generated.
Scene. When it is set to 'Near', the video stream will be fed to the AI pipeline 'as is', without any size modification. When it is set to 'Far', the video stream is first split into 4 quarters, and each quarter is fed to the AI pipeline individually before being stitched together again at the end.
The benefit of quartering the image is that very far away objects will be detected more accurately. The downside is that it takes 4x as long to check each image, so the Edge Controller cannot process as many frames per second. Generally this is not a problem, because the further away the object is, the longer it will remain in the camera's field of view.
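To illustrate the 'Far' behaviour, the sketch below splits a frame into four quarters, runs a detector on each quarter, and shifts the results back into full-frame coordinates. The run_detector callable is a hypothetical stand-in for the real AI pipeline, not part of Giraffe's software.

```python
# A sketch of the 'Far' scene behaviour: detect on each quarter separately,
# then merge the results back into full-frame coordinates.
import numpy as np


def detect_far_scene(frame: np.ndarray, run_detector) -> list[dict]:
    """Run detection on each quarter of the frame and merge the results."""
    height, width = frame.shape[:2]
    half_h, half_w = height // 2, width // 2
    merged = []
    for row in (0, 1):
        for col in (0, 1):
            y0, x0 = row * half_h, col * half_w
            quarter = frame[y0:y0 + half_h, x0:x0 + half_w]
            for det in run_detector(quarter):
                # Shift quarter-local coordinates back to full-frame ones.
                merged.append({
                    "label": det["label"],
                    "x": det["x"] + x0,
                    "y": det["y"] + y0,
                })
    return merged
```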
Instant Alarm On Detection. By default, the AI algorithm will only send an event when an object has been detected in two subsequent frames, and the position between the first and the second frames has moved by a sufficient amount. This reduces the number of false positives and prevents stationary cars from generating repetitive alerts.
If instant alarm on detection is enabled, an event will be triggered immediately if a person is detected. This means the person will only have to be detected in one frame without moving. This setting has no effect on vehicle detection.
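The sketch below illustrates how this setting could change the alarm decision: people can trigger on a single detection when it is enabled, while vehicles always require movement across two frames. The function and parameter names are illustrative, not Giraffe APIs.

```python
# A sketch of the decision logic around 'Instant Alarm On Detection'.
def should_alarm(
    label: str,
    seen_before: bool,
    moved_enough: bool,
    instant_alarm_on_detection: bool,
) -> bool:
    # People can trigger immediately when the setting is enabled.
    if instant_alarm_on_detection and label == "person":
        return True
    # Otherwise (and always for vehicles) require a detection in two
    # consecutive frames with sufficient movement between them.
    return seen_before and moved_enough
```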
Upload Video Clip. By default, the Edge Controller will upload 7 snapshot images (taken either side of the frame that triggered the event) and a 30-second video (15 seconds either side of the event).
If this setting is disabled, the Edge Controller will skip uploading the video clip, and only upload the snapshot images.
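The sketch below illustrates the resulting upload: snapshots around the triggering frame are always included, and the 30-second clip window is added only when the setting is enabled. The helper names and payload layout are assumptions for illustration, not the Edge Controller's actual format.

```python
# A sketch of what gets uploaded for one AI event, depending on the
# 'Upload Video Clip' setting. The layout is illustrative only.
def snapshot_indices(event_frame: int, count: int = 7) -> list[int]:
    """Frame indices of the snapshots taken either side of the event frame."""
    half = count // 2
    return list(range(event_frame - half, event_frame + half + 1))


def clip_window(event_time_s: float, padding_s: float = 15.0) -> tuple[float, float]:
    """Start and end times of the 30-second clip around the event."""
    return (event_time_s - padding_s, event_time_s + padding_s)


def build_event_upload(event_frame: int, event_time_s: float,
                       upload_video_clip: bool) -> dict:
    """Assemble the upload for one event: snapshots always, clip optionally."""
    upload = {"snapshots": snapshot_indices(event_frame)}
    if upload_video_clip:
        upload["video_clip"] = clip_window(event_time_s)
    return upload
```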
You can view a low frame rate preview of what the AI is seeing using this feature. Any humans or vehicles will be marked on the image, and any configured mask will be overlaid in a translucent white colour.
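For illustration only, the sketch below shows how such a preview overlay could be drawn with OpenCV: detections are boxed and the masked region is blended towards white so it appears translucent. This is not the actual Giraffe preview implementation.

```python
# A sketch of drawing a preview overlay: boxes for detections, translucent
# white for the configured mask. Illustrative only.
import cv2
import numpy as np


def draw_preview(frame: np.ndarray, boxes: list[tuple], mask: np.ndarray) -> np.ndarray:
    """Return a copy of the frame with detection boxes and a white mask overlay."""
    preview = frame.copy()
    for (x, y, w, h) in boxes:  # one box per detected person or vehicle
        cv2.rectangle(preview, (x, y), (x + w, y + h), (0, 255, 0), 2)
    white = np.full_like(preview, 255)
    # Blend the whole image towards white, then keep the blend only where
    # the mask applies so that region looks translucent white.
    blended = cv2.addWeighted(preview, 0.6, white, 0.4, 0)
    preview[mask > 0] = blended[mask > 0]
    return preview
```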
As of Edge Controller firmware 3.9, cameras can send SMTP alarm emails containing snapshots directly to the Edge Controller. The Edge Controller will then process these alarms in the same way as if it had generated them itself.
The alarms will be forwarded to Giraffe Cloud in the normal way and also forwarded on to alarm receiving centres.
Any schedules or arming rules that have been set up will be respected in the normal way.
Please see the Edge Controller SMTP documentation section for more detail on how to set this up.
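As a rough illustration of this kind of SMTP intake (not the Edge Controller's actual implementation), the sketch below uses the third-party aiosmtpd package to accept an alarm email from a camera and extract its image attachments; the port and handler names are assumptions.

```python
# A sketch of receiving camera alarm emails over SMTP and pulling out the
# snapshot attachments. Illustrative only; not Giraffe code.
import email
from aiosmtpd.controller import Controller


class CameraAlarmHandler:
    async def handle_DATA(self, server, session, envelope):
        message = email.message_from_bytes(envelope.content)
        snapshots = [
            part.get_payload(decode=True)
            for part in message.walk()
            if part.get_content_type().startswith("image/")
        ]
        # From here the alarm would be processed like a locally generated
        # one: forwarded to Giraffe Cloud and on to alarm receiving centres.
        print(f"Alarm email from {envelope.mail_from}: {len(snapshots)} snapshot(s)")
        return "250 Message accepted for delivery"


if __name__ == "__main__":
    controller = Controller(CameraAlarmHandler(), hostname="0.0.0.0", port=2525)
    controller.start()  # listens in a background thread
    input("SMTP listener running; press Enter to stop.\n")
    controller.stop()
```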
See the page for details on how to view uploaded video events.