# Artificial intelligence

Artificial intelligence events can be managed in two ways:

* using the Edge Controller's built-in AI features
* using the camera's onboard AI features (and having the camera forward the alarms to the Edge Controller via SMTP)

## Giraffe AI

Giraffe is training its own AI models, predominantly on construction site environments. The models are trained to identify vehicles and people.

### How it works

When designing our AI system, we wanted to keep the configuration required per camera to a minimum. This is why you will not find features like line crossing to set up.

The way the system works is as follows:

1. We identify any people or vehicles in each frame of the video
2. We store the position of each of those objects
3. In subsequent frames, if the position of a tracked object has changed, we generate an alarm
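The steps above can be sketched as follows. This is a hypothetical simplification: detections here are pre-labelled `(x, y)` positions keyed by a tracking ID, whereas the real system runs an AI model and tracker over every video frame. The function name and `min_shift` parameter are illustrative, not part of the product.

```python
def find_alarms(previous, current, min_shift=1.0):
    """Return IDs of tracked objects whose position changed between frames.

    `previous` and `current` map tracking IDs to (x, y) positions.
    """
    alarms = []
    for obj_id, (x, y) in current.items():
        if obj_id in previous:
            px, py = previous[obj_id]
            # An object that moved by at least `min_shift` generates an alarm.
            if abs(x - px) >= min_shift or abs(y - py) >= min_shift:
                alarms.append(obj_id)
    return alarms

prev_frame = {"person-1": (10.0, 20.0), "vehicle-1": (50.0, 50.0)}
curr_frame = {"person-1": (14.0, 20.0), "vehicle-1": (50.0, 50.0)}
print(find_alarms(prev_frame, curr_frame))  # -> ['person-1']
```

Note that the stationary vehicle generates no alarm: only movement between frames does.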

### Configuration options

The AI is configured on a per-camera basis on the Analytics tab (Settings -> Systems -> select the system -> Cameras -> select the camera -> Analytics).

<figure><img src="/files/jqsiIeVE3dU1ysMXNAfF" alt=""><figcaption></figcaption></figure>

**Enabled.** This controls whether the video stream from this camera is fed through the AI pipeline. If this is disabled, no AI events will be generated.

**Upload Video Clip.** By default, the Edge Controller will upload 5 snapshot images (the frame that triggered the event, plus two either side), and optionally a 30 second video (15 seconds either side of the event).

This setting configures when the video clip gets uploaded:

**Always**. This means we will try to upload the video clip as soon as possible.

**Never**. This means we will never upload video clips.

**When Verified**. This means we will wait for the Giraffe Cloud to perform further analysis of the snapshots prior to deciding whether to upload the video or not.

The way this works is that when a video event is created, a 30 second timer is started on the Edge Controller. During that 30 seconds, the Edge Controller waits for a response from Giraffe Cloud to tell it whether the event was a true event or a false alert. If it was a true event, the Edge Controller will upload the video clip. If it was a false alert, the video clip is discarded.

If the 30 second timer expires without receiving a response from Giraffe Cloud, the video clip will be uploaded anyway.
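The 'When Verified' decision described above can be sketched as a timed wait. This is a minimal illustration, not the Edge Controller's implementation; the function names and verdict strings are assumptions.

```python
import threading

def decide_upload(verdict_ready, get_verdict, timeout=30.0):
    """Wait up to `timeout` seconds for a cloud verdict; return True to upload.

    `verdict_ready` is a threading.Event set when the cloud responds;
    `get_verdict` returns the verdict string once it is available.
    """
    if verdict_ready.wait(timeout):
        # A verdict arrived in time: discard only on a confirmed false alert.
        return get_verdict() != "false"
    # Timer expired with no response: upload anyway.
    return True

# A 'false' verdict arriving within the window means the clip is discarded.
ready = threading.Event()
ready.set()
print(decide_upload(ready, lambda: "false", timeout=0.1))  # -> False

# No verdict at all: the clip is uploaded once the (shortened) timer expires.
print(decide_upload(threading.Event(), lambda: "false", timeout=0.1))  # -> True
```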

{% hint style="warning" %}
The 'When Verified' setting only makes sense if Giraffe Cloud Filtering is enabled. If Cloud Filtering is disabled, the video clip will be uploaded anyway (the Edge Controller treats 'skipped' the same as 'true event').
{% endhint %}

### Detection method

The Edge Controller can process frames from cameras in a number of different ways depending on the hardware available and the desired level of sensitivity.

#### AI

This is the default setting and what we recommend for most scenarios. The Edge Controller uses an AI model to look for people and moving vehicles in the camera footage.

This setting only works if there is an AI accelerator connected to the Edge Controller. If the AI accelerator is unavailable or not working, the Edge Controller will automatically fall back to the motion detection method and an error state will be raised.

Please see the [Giraffe AI](/edge-controller/giraffe-ai.md) page for more details about the performance and capability of our AI feature.

When events are uploaded, they are categorised as either 'person' or 'vehicle'.

#### Motion

This is an advanced motion detection algorithm that looks for pixel changes in the image.

It is very sensitive and will identify even very small objects moving in the scene, as long as there is enough visual contrast.

However, as with all motion detection algorithms, it is affected by rain and by objects flying in front of the camera.

We do not recommend using motion without either using Giraffe Cloud Filtering or forwarding the alarms to an alarm receiving centre that uses a cloud filtering solution.

Motion works best with thermal imaging cameras as they are not affected by rain.

When events are uploaded, they are categorised as 'unknown' events.

#### Motion + AI

This is a combination of the two previous options and is only recommended when extreme sensitivity is required.

The AI model is sometimes able to identify very small objects far away that motion detection would miss. Likewise, motion detection will sometimes trigger on things that AI does not recognise as a threat.

When events are uploaded, they are categorised as 'vehicle' or 'person' where possible, or if only detected by motion, they are categorised as 'unknown'.
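The categorisation rule described above can be sketched as follows. This is a hypothetical illustration under the assumption that detections are `(x1, y1, x2, y2)` bounding boxes; the function names are not part of the product.

```python
def overlaps(a, b):
    """True if two (x1, y1, x2, y2) boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def categorise(ai_detections, motion_regions):
    """AI detections keep their class; motion-only regions become 'unknown'."""
    events = [{"category": cls, "box": box} for cls, box in ai_detections]
    for box in motion_regions:
        # Motion that matches no AI detection is reported as 'unknown'.
        if not any(overlaps(box, e["box"]) for e in events):
            events.append({"category": "unknown", "box": box})
    return events

ai = [("person", (10, 10, 50, 100))]
motion = [(12, 12, 40, 80), (200, 200, 240, 260)]
for event in categorise(ai, motion):
    print(event["category"])  # -> person, then unknown
```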

### Person confidence threshold

This threshold is used when the detection method is `AI` and a person is detected.

If the AI processor on the Edge Controller fails, this threshold will also be used by the motion detection algorithm to determine how many pixels must change in order to trigger motion detection.

### Vehicle confidence threshold

This threshold is used when the detection method is `AI` and a vehicle is detected.

It is never used during motion detection.

### Motion confidence threshold

This threshold is used when the detection method is `Motion` or `Motion+AI`.

It configures how many pixels must change within an area to trigger a motion detection event.
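A minimal sketch of a pixel-change check of this kind, assuming flat grayscale frames and illustrative `pixel_delta` / `changed_fraction` parameters (the real algorithm and its tuning are not documented here):

```python
def motion_triggered(prev, curr, pixel_delta=25, changed_fraction=0.01):
    """Trigger when enough pixels differ between two grayscale frames.

    `prev` and `curr` are equal-length sequences of 0-255 pixel values.
    """
    changed = sum(1 for p, c in zip(prev, curr) if abs(p - c) >= pixel_delta)
    # The confidence threshold maps onto the fraction of pixels that must change.
    return changed / len(prev) >= changed_fraction

static = [0] * 100
one_bright_pixel = [0] * 99 + [255]
print(motion_triggered(static, one_bright_pixel))  # -> True (1% of pixels changed)
print(motion_triggered(static, static))            # -> False
```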

### Minimum / maximum object size

<figure><img src="/files/yDNqur8dkSHsNPHRrLvv" alt="" width="375"><figcaption></figcaption></figure>

Depending on which detection method you have enabled, it is possible to configure the minimum and maximum area of the frame that an object must occupy to trigger an alert.

This can be useful for filtering out small changes - such as leaves or rubbish blowing in the wind - while making sure that larger objects (such as people and vehicles) are still detected.
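A size filter of this kind can be sketched as below. The percentage bounds and function name are illustrative assumptions; substitute the values configured for your camera.

```python
def passes_size_filter(box, frame_w, frame_h, min_pct=0.5, max_pct=50.0):
    """Keep detections whose area is within [min_pct, max_pct] % of the frame."""
    x1, y1, x2, y2 = box
    area_pct = 100.0 * (x2 - x1) * (y2 - y1) / (frame_w * frame_h)
    return min_pct <= area_pct <= max_pct

# A 10x10 region (e.g. a leaf) on a 1920x1080 frame is far below 0.5%:
print(passes_size_filter((0, 0, 10, 10), 1920, 1080))        # -> False
# A 200x400 region (e.g. a person) is roughly 3.9% of the frame:
print(passes_size_filter((100, 100, 300, 500), 1920, 1080))  # -> True
```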

### Event viewing

See the [video event history](/cloud/alarm-settings/alarm-handling.md) page for details on how to view uploaded video events.

### AI preview

You can use this feature to view a low frame rate preview of what the AI is seeing. Any people or vehicles will be marked on the image, and any configured mask will be overlaid in a translucent white colour.

<figure><img src="/files/8bJClG5WehVOBIn4DGqV" alt=""><figcaption></figcaption></figure>

## Cloud filtering

If you have cloud filtering enabled for a camera, it is possible to configure the confidence scores for both person and vehicle detections.

Cloud filtering will use the same minimum and maximum object sizes as defined for the Edge Controller.

## On camera AI

Cameras can send SMTP alarm emails containing snapshots direct to the Edge Controller. The Edge Controller will then process these alarms in the same way as if it had generated the alarms itself.

The alarms will be forwarded to Giraffe Cloud in the normal way and also forwarded onto alarm receiving centres.

Any schedules or arming rules that have been set up will be respected in the normal way.

Please see the [Edge Controller SMTP documentation](/edge-controller/smtp-alarm-receiver.md) section for more detail on how to set this up. Once the alarm is received by the Edge Controller, it is handled the same as a motion detection alarm and categorised as 'unknown'.
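For illustration, a camera-style alarm email with a snapshot attached can be built and sent with Python's standard `email` and `smtplib` modules. The addresses and host below are placeholders; use the values from your Edge Controller's SMTP receiver configuration.

```python
import smtplib
from email.message import EmailMessage

def build_alarm_email(snapshot: bytes, sender: str, recipient: str) -> EmailMessage:
    """Build an alarm email with a JPEG snapshot attached."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Motion alarm"
    msg.set_content("Motion detected")
    msg.add_attachment(snapshot, maintype="image", subtype="jpeg",
                       filename="snapshot.jpg")
    return msg

# Sending it to the Edge Controller's SMTP receiver (placeholder address):
# with open("snapshot.jpg", "rb") as f:
#     msg = build_alarm_email(f.read(), "camera-01@example.local",
#                             "alarms@edge-controller.local")
# with smtplib.SMTP("192.168.1.10", 25) as smtp:
#     smtp.send_message(msg)
```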

