@@ -46,28 +46,35 @@ Enabling `POLL_STATS`
  This is an experimental feature.


- BBBLB can store meeting statistics (namely `users`, `voice` and `video` counts) into the
- database, allowing you to run your own queries and analytics. This is disabled by default,
- because one database row per meeting per `POLL_INTERVAL` quickly adds up and there is no
- automatic cleanup. You'll have to delete old rows yourself and make sure your database can
- handle it. On the plus side, those numbers won't contain any personal data, just meeting
- IDs and counters.
-
- Once activated with the `POLL_STATS` setting, the meeting poller will store the current
- user, voice and video stream count per meeting on each poll. You can write your own SQL
- queries to get what you need. A common approach would be to fetch all rows in a certain
- time range, calculate average values per meeting, then group those together by tenant.
-
- Here is an (untested) example query that shows most of the techniques:
+ BBBLB can collect and store detailed meeting statistics in the database, allowing you to
+ run your own SQL queries to get any metrics you may need. Once activated with the
+ `POLL_STATS` setting, the meeting poller will store the current `users`, `voice` and
+ `video` counts for each running meeting on each server poll. Those statistics do not
+ contain any personal data, which makes this approach very GDPR friendly.
+
+ The `POLL_STATS` feature is disabled by default, because the database table will grow by
+ one row per meeting per `POLL_INTERVAL` and there is no automatic cleanup. This adds up
+ quickly, especially for large or busy clusters. Make sure to delete old rows regularly
+ to keep your database size in check.
+
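+ Here is an (untested) example cleanup query for PostgreSQL that deletes all statistics
+ older than 90 days (the 90-day retention is just an example, adjust it to your needs):
+
+ .. code:: sql
+
+     DELETE FROM meeting_stats
+     WHERE ts < NOW() - INTERVAL '90 days';
+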
+ The `meeting_stats` table is structured similarly to a time-series database. Each row has
+ a timestamp (`ts`), the `uuid` of the meeting, the reusable external `meeting_id` that
+ was used to create the meeting, the tenant (`tenant_fk`), and three metric values named
+ `users`, `voice` and `video`.
+
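+ As a first (untested) sanity check, you can list the raw samples stored for a single
+ meeting, newest first (replace the placeholder with a real meeting `uuid`):
+
+ .. code:: sql
+
+     SELECT ts, users, voice, video
+     FROM meeting_stats
+     WHERE uuid = '<meeting-uuid>'
+     ORDER BY ts DESC;
+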
+ Here is an (untested) example PostgreSQL query returning some useful aggregations. It
+ fetches all rows in a certain time range, calculates min/max/average values per meeting
+ (per `uuid`), then groups those together by `tenant_fk` to get meaningful aggregated
+ values per tenant:

  .. code:: sql

      SELECT
          tenants.name,
-         /* Total number of meeting minutes spent by each participant */
-         SUM(users_avg * EXTRACT(epoch FROM started - ended)) / 60,
+         /* Total number of meeting minutes spent by all users combined */
+         SUM(users_avg * EXTRACT(epoch FROM duration)) / 60,
          /* Average meeting duration in minutes */
-         AVG(EXTRACT(epoch FROM started - ended)) / 60,
+         AVG(EXTRACT(epoch FROM duration)) / 60,
          /* Average meeting size */
          AVG(users_avg),
          /* Maximum meeting size */
@@ -80,8 +87,7 @@ Here is an (untested) example query that shows most of the techniques:
      SELECT
          tenant_fk,
          uuid,
-         MIN(ts) AS started,
-         MAX(ts) AS ended,
+         MAX(ts) - MIN(ts) AS duration,
          AVG(users) AS users_avg,
          MAX(users) AS users_max
      FROM meeting_stats