# Collection Level

The agent's metric collection can be customized through the [Agent CLI](https://docs.datasentinel.io/manual/implementation/agent-usage/cli) or the [Agent API](https://docs.datasentinel.io/manual/implementation/agent-usage/api-reference).

## Display Properties

***

```bash
datasentinel show connection <<connection_name>>
```

{% code title="Output" %}

```
Copyright 2023 (c) datasentinel - All rights reserved       www.datasentinel.io
================================================================================

                   Name : myNewConnection
                 Status : enabled
                  State : connected

                   Host : pg-sales-1734
                   Port : 9342

                   user : datasentinel
               password : sentinel

        Lock monitoring : on
  Lock monitoring delay : 30s
       Table monitoring : on

          Query samples : off

       Query monitoring : on
               Optimize : on
              Min calls : 2
               Min time : 1s

                   tags : application=sales,environment=production,provider=amazon,datacenter=lyon
```

{% endcode %}

{% hint style="info" %}
The connection settings are saved in the **`$HOME/.datasentinel/connections.yml`** file.
{% endhint %}

## Query Samples

***

{% hint style="info" %}
The default is **off** since v3.2; it was **on** in older versions.
{% endhint %}

If enabled, the agent collects a sample of the executed queries every hour.

Each sample can contain the literal values of the query, along with other relevant information such as the time the query was collected and the database in which it was executed.

The collected samples can be used for a variety of purposes, such as analyzing query performance, identifying common issues or errors, and optimizing the database.
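As an illustration only (the agent's actual collection method is internal), the kind of information a sample carries can be approximated with a query against the standard **pg\_stat\_activity** view, which exposes the raw text of running statements, literal values included:

```sql
-- Hypothetical sketch, not the agent's query: currently running statements
-- with their literal values, the database they run in, and a collection time.
select now() as collected_at,
       datname,
       usename,
       query
from pg_stat_activity
where state = 'active'
  and pid <> pg_backend_pid();
```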

* [Agent CLI](https://docs.datasentinel.io/manual/implementation/cli#query-samples) example

```bash
datasentinel update connection <<connection_name>> samples on
```

## Lock Monitoring

***

> Default is **on**
>
> Default lock monitoring delay is **30** seconds

The agent checks the database every minute for any blocking scenarios that may be occurring.

A blocking session is detected if its transaction has been blocking at least one other session for a period of time longer than the registered delay.

In that case, the agent investigates further and gathers information about the blocking session, the sessions being blocked, the resources being locked, and the SQL statements involved.

A blocking scenario is then created and sent to the platform, where it can be analyzed through the user interface to understand the cause.
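The pairing of blocked and blocking sessions can be sketched with the standard **pg\_blocking\_pids()** function (available since PostgreSQL 9.6); this is an illustration of the principle, not the agent's actual query:

```sql
-- Hypothetical sketch of blocking-scenario detection:
-- each waiting session paired with the session(s) blocking it.
select blocked.pid                   as blocked_pid,
       blocked.query                 as blocked_query,
       now() - blocked.query_start   as wait_duration,
       blocking.pid                  as blocking_pid,
       blocking.query                as blocking_query
from pg_stat_activity blocked
join pg_stat_activity blocking
  on blocking.pid = any (pg_blocking_pids(blocked.pid))
where cardinality(pg_blocking_pids(blocked.pid)) > 0;
```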

* [Agent CLI](https://docs.datasentinel.io/manual/implementation/cli#lock-monitoring) example

```bash
datasentinel update connection <<connection_name>> lock-monitoring off
```

* Lock monitoring delay example

```bash
datasentinel update connection <<connection_name>> lock-monitoring-delay 60
```

## Table Monitoring

***

The agent collects activity metrics for tables and indexes on all databases in the cluster once per hour. This includes metrics such as the number of rows inserted, updated, or deleted, as well as the I/O activity, the cache activity, and the amount of storage space used by the tables and indexes.

The agent checks the number of tables in all databases against the table monitoring limit set at the agent level. The limit is set to **1000** by default, but it can be changed by the user if necessary.

If the number of tables is greater than the limit, the agent deactivates the monitoring of tables and indexes for that cluster. This is done to prevent the agent from becoming overwhelmed by the large number of tables and indexes, which could lead to performance issues.

If the number of tables is less than or equal to the limit, the agent then checks the flag set at the cluster level.

If the flag is set to **on**, the agent continues to monitor the tables and indexes for that cluster. This allows the user to control whether or not table monitoring is active for a particular cluster.
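The table count can be checked manually, one database at a time, with a query such as the following (a sketch; the agent applies the limit across all databases in the cluster):

```sql
-- Hypothetical per-database check of the number of user tables,
-- to compare against the table monitoring limit (1000 by default).
select count(*) as user_tables
from pg_stat_user_tables;
```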

* [Agent CLI](https://docs.datasentinel.io/manual/implementation/cli#table-monitoring) example

```bash
datasentinel update connection <<connection_name>> table-monitoring off
```

## Query Monitoring

***

The agent collects query execution statistics using the **pg\_stat\_statements** extension.

The extension replaces constants in queries with a parameter symbol (such as $1), which allows for aggregation of similar queries.

However, even with this technique, there may still be situations where a large number of unique queries are registered. To prevent the agent from becoming overwhelmed by this large number of unique queries and experiencing performance issues, a limit (referred to as **query-monitoring-limit**) is set at the agent level, with a default value of **15000**.

This limit represents the number of unique queries per day that can be collected and stored in the agent’s dedicated cache. If the number of unique queries exceeds this limit, the agent will deactivate the monitoring of queries.
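The number of distinct normalized statements currently registered can be checked against the limit with a query such as this sketch:

```sql
-- Hypothetical check of the number of distinct normalized statements
-- currently held by pg_stat_statements, to compare against
-- query-monitoring-limit.
select count(*) as unique_queries
from pg_stat_statements;
```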

* [Agent CLI](https://docs.datasentinel.io/manual/implementation/cli#query-monitoring) example

```bash
datasentinel update connection <<connection_name>> query-monitoring off
```

* Update the Limit

```bash
datasentinel set query-monitoring-limit 70000
```

## Query Monitoring Filters

***

Retrieve **pg\_stat\_statements** queries executed at least *n* times (**calls**) **OR** with a total execution time exceeding the specified number of seconds.

> The default **min time** is **1s**

* [Agent CLI](https://docs.datasentinel.io/manual/implementation/cli#query-monitoring-min-calls) example

```bash
datasentinel update connection <<connection_name>> query-monitoring-min-calls 50
```

If you want to view all queries registered in **pg\_stat\_statements**, set the value to **0**.

* [Agent CLI](https://docs.datasentinel.io/manual/implementation/cli#query-monitoring-min-time) example

```bash
datasentinel update connection <<connection_name>> query-monitoring-min-time 10
```
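The combined filter is roughly equivalent to the following **pg\_stat\_statements** query (a sketch; the column name **total\_exec\_time** assumes PostgreSQL 13 or later):

```sql
-- Hypothetical equivalent of the two filters combined: keep statements
-- executed at least 50 times OR with more than 10 seconds of total
-- execution time (total_exec_time is expressed in milliseconds).
select queryid, calls, total_exec_time
from pg_stat_statements
where calls >= 50
   or total_exec_time > 10 * 1000;
```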

## Query Monitoring Optimize

***

Enabling this option can be beneficial if many queries present in **pg\_stat\_statements** are similar (see the examples below).

In this case, Datasentinel replaces the variable part of the query text and aggregates their execution statistics, resulting in a single distinct query.

There is a slight overhead as the agent will retrieve the text of the active queries at each collection (every minute) to analyze it.

The following command allows the agent to test this functionality on the **pg\_stat\_statements** view.

```bash
datasentinel update connection <<connection_name>> test-query-monitoring-optimize
```

{% code title="Response" %}

```json
{
  "pg_stat_statements": 964,
  "min_calls_filter": 21,
  "min_time_filter": "33s",
  "filtered_statements_get_time": "153ms",
  "filtered_statements_with_min_calls_or_min_time": 109,
  "filtered_ratio": "89%",
  "optimized_statements": 109,
  "optimized_ratio": "0%",
  "optimized_statements_run_time": "8ms"
}
```

{% endcode %}

The displayed report, and in particular the **optimized\_statements** and **optimized\_ratio** columns, provides information on the benefit of this option. \
Do not hesitate to contact support at [support@datasentinel.io](mailto:support%40datasentinel.io) for more details.

* [Agent CLI](https://docs.datasentinel.io/manual/implementation/cli#test-query-monitoring-optimization) example: enabling this option

```bash
datasentinel update connection <<connection_name>> query-monitoring-optimize on
```

Similar queries in **pg\_stat\_statements**

* Multi-value insert statements:

```sql
insert into history (name, value, created) values ($1,$2,$3);
insert into history (name, value, created) values ($1,$2,$3), ($1,$2,$3);
insert into history (name, value, created) values ($1,$2,$3), ($1,$2,$3), ($1,$2,$3);
```

* List statements:

```sql
select * from pgbench_branches where bid in ($1,$2,$3);
select * from pgbench_branches where bid in ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10);
select * from pgbench_branches where bid in ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13);
```

{% hint style="info" %}
The agent's Optimize feature merges the runtime metrics of similar statements and rewrites the query text accordingly.
{% endhint %}

Merged query text example:

```sql
select * from pgbench_branches where bid in ($n,..,$n)
```
