Complete task 15.7: resource usage logging

Centralized logging of all resource-intensive operations to the
resource_usage_log table (created in migration 009).

New code:
- resource_usage.rs: helper module with log() and find_collection_for_node()
- bandwidth.rs: Caddy JSON log parser with a nightly batch job (03:00)

Logging added in handlers:
- AI: summarize, ai_edges (token counts via the LiteLLM usage field),
  agent (placeholder; the claude CLI does not expose token info)
- Whisper: duration_seconds, model, language, mode
- TTS: refactored onto the centralized module, collection_id added
- CAS: logs new files on upload (dedup hits are skipped)
- LiveKit: logs join events (actual participant minutes will
  require webhook integration later)

Caddy config: JSON access logging enabled for sidelinja.org and
synops.no in /srv/synops/config/caddy/Caddyfile (outside the repo).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
vegard 2026-03-18 04:24:54 +00:00
parent 9a5d24c850
commit eb81055ef4
11 changed files with 566 additions and 23 deletions


@@ -199,3 +199,27 @@ forbrukt.
Bandwidth logging happens via Caddy log parsing in a nightly batch job
(same pattern as `docs/features/podcast_statistikk.md`).
## Implementation status
The following resource types are logged to `resource_usage_log`:
| Resource type | Handler | Status |
|---|---|---|
| `ai` | `summarize.rs`, `ai_edges.rs`, `agent.rs` | Implemented. Token counts from the LiteLLM `usage` field. The agent uses the `claude` CLI and logs 0 tokens (the CLI does not expose token info). |
| `whisper` | `transcribe.rs` | Implemented. Logs `duration_seconds`, `model`, `language`, `mode`. |
| `tts` | `tts.rs` | Implemented. Logs `provider`, `characters`, `voice_id`. |
| `cas` | `intentions.rs` (upload_media) | Implemented. Logs new files only (not dedup hits). `hash`, `size_bytes`, `mime`, `operation`. |
| `livekit` | `intentions.rs` (join_communication) | Implemented. Logs `join` events. Actual `participant_minutes` requires LiveKit webhook integration (future work). |
| `bandwidth` | `bandwidth.rs` | Implemented. A nightly job (03:00) parses Caddy JSON access logs. |
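For reference, the `detail` JSONB column holds one type-specific object per row. A sketch of what each resource type's payload can look like, using the field names from the handlers above (the concrete values, hashes, and model/voice/room names here are purely illustrative):

```json
{
  "ai":        { "model_level": "smart", "model_id": "example-model", "tokens_in": 1200, "tokens_out": 340, "job_type": "summarize_communication" },
  "whisper":   { "model": "example-model", "duration_seconds": 95.5, "language": "no", "mode": "batch" },
  "tts":       { "provider": "elevenlabs", "characters": 4200, "voice_id": "example-voice" },
  "cas":       { "hash": "abc123…", "size_bytes": 84000000, "mime": "audio/mpeg", "operation": "store" },
  "livekit":   { "room_id": "example-room", "participant_minutes": 0, "tracks": 0, "event": "join" },
  "bandwidth": { "size_bytes": 84000000, "path": "/media/podcast/ep47.mp3", "client": "Apple Podcasts" }
}
```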
### Centralized helper module
`resource_usage.rs` provides `log()` and `find_collection_for_node()`.
All handlers use it for consistent logging.
### Caddy setup
JSON access logging is configured in the Caddyfile for `sidelinja.org`
and `synops.no`. Logs are written to `/var/log/caddy/access-*.log`
with 100 MiB rotation and 7 files retained.
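The Caddyfile itself lives outside the repo, so as a sketch only: a per-site logging block matching the description above (log path and rotation values taken from this section; everything else is assumed boilerplate) could look like:

```caddyfile
sidelinja.org {
	log {
		format json
		output file /var/log/caddy/access-sidelinja.log {
			roll_size 100MiB
			roll_keep 7
		}
	}
	# ... existing site config (reverse_proxy etc.) ...
}
```

A corresponding block would apply to `synops.no`.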


@@ -8,6 +8,7 @@ use sqlx::PgPool;
use uuid::Uuid;
use crate::jobs::JobRow;
use crate::resource_usage;
use crate::stdb::StdbClient;
#[derive(Debug)]
@@ -255,6 +256,27 @@ Svar KUN med meldingsteksten.
.bind(agent_node_id).bind(communication_id).bind(job.id)
.execute(db).await.map_err(|e| format!("PG: {e}"))?;
// Log to resource_usage_log (unified resource tracking)
let collection_id = resource_usage::find_collection_for_node(db, communication_id).await;
if let Err(e) = resource_usage::log(
db,
communication_id,
Some(sender_node_id),
collection_id,
"ai",
serde_json::json!({
"model_level": "deep",
"model_id": "claude-code-cli",
"tokens_in": 0,
"tokens_out": 0,
"job_type": "agent_respond"
}),
)
.await
{
tracing::warn!(error = %e, "Kunne ikke logge AI-ressursforbruk for agent_respond");
}
tracing::info!(reply_node_id = %reply_id, "Agent-svar persistert");
Ok(serde_json::json!({


@@ -18,6 +18,7 @@ use sqlx::PgPool;
use uuid::Uuid;
use crate::jobs::JobRow;
use crate::resource_usage;
use crate::stdb::StdbClient;
/// Existing topic node from PG.
@@ -83,6 +84,18 @@ struct ChatMessage {
#[derive(Deserialize)]
struct ChatResponse {
choices: Vec<Choice>,
#[serde(default)]
usage: Option<UsageInfo>,
#[serde(default)]
model: Option<String>,
}
#[derive(Deserialize, Clone)]
struct UsageInfo {
#[serde(default)]
prompt_tokens: i64,
#[serde(default)]
completion_tokens: i64,
}
#[derive(Deserialize)]
@@ -171,7 +184,7 @@ pub async fn handle_suggest_edges(
)
};
-let suggestion = call_llm(&user_content).await?;
+let (suggestion, llm_usage, llm_model) = call_llm(&user_content).await?;
tracing::info!(
node_id = %node_id,
@@ -255,6 +268,31 @@ pub async fn handle_suggest_edges(
}
});
// Log AI resource usage
let collection_id = resource_usage::find_collection_for_node(db, node_id).await;
let (tokens_in, tokens_out) = llm_usage
.map(|u| (u.prompt_tokens, u.completion_tokens))
.unwrap_or((0, 0));
if let Err(e) = resource_usage::log(
db,
node_id,
source.created_by,
collection_id,
"ai",
serde_json::json!({
"model_level": "fast",
"model_id": llm_model.unwrap_or_else(|| "unknown".to_string()),
"tokens_in": tokens_in,
"tokens_out": tokens_out,
"job_type": "suggest_edges"
}),
)
.await
{
tracing::warn!(error = %e, "Kunne ikke logge AI-ressursforbruk for edge-forslag");
}
tracing::info!(
node_id = %node_id,
topics_created = created_topics,
@@ -266,7 +304,8 @@ pub async fn handle_suggest_edges(
}
-/// Call LiteLLM (OpenAI-compatible API) to analyze content.
-async fn call_llm(user_content: &str) -> Result<AiSuggestion, String> {
+/// Call LiteLLM (OpenAI-compatible API) to analyze content.
+/// Returns (suggestion, usage, model).
+async fn call_llm(user_content: &str) -> Result<(AiSuggestion, Option<UsageInfo>, Option<String>), String> {
let gateway_url = std::env::var("AI_GATEWAY_URL")
.unwrap_or_else(|_| "http://localhost:4000".to_string());
let api_key = std::env::var("LITELLM_MASTER_KEY")
@@ -328,7 +367,7 @@ async fn call_llm(user_content: &str) -> Result<AiSuggestion, String> {
let suggestion: AiSuggestion = serde_json::from_str(content)
.map_err(|e| format!("Kunne ikke parse LLM JSON: {e}. Rå output: {content}"))?;
-Ok(suggestion)
+Ok((suggestion, chat_resp.usage, chat_resp.model))
}
/// Create a topic node in PG and STDB.


@@ -0,0 +1,274 @@
// Bandwidth logging via Caddy log parsing.
//
// A nightly batch job reads Caddy JSON access logs and
// aggregates bytes served into resource_usage_log.
//
// Caddy JSON format (relevant fields):
// { "request": { "uri": "/media/...", "headers": { "User-Agent": [...] } },
//   "status": 200, "size": 84000000, "ts": 1710000000.0 }
//
// Ref: docs/features/ressursforbruk.md
use serde::Deserialize;
use sqlx::PgPool;
use uuid::Uuid;
/// Caddy JSON log format (only the fields we need).
#[derive(Deserialize)]
struct CaddyLogEntry {
request: CaddyRequest,
#[serde(default)]
status: u16,
#[serde(default)]
size: u64,
#[serde(default)]
#[allow(dead_code)]
ts: f64,
}
#[derive(Deserialize)]
struct CaddyRequest {
uri: String,
#[serde(default)]
headers: std::collections::HashMap<String, Vec<String>>,
}
/// Aggregated bandwidth per path.
struct BandwidthEntry {
path: String,
size_bytes: u64,
client: String,
}
/// Parses a Caddy JSON log file and returns bandwidth entries
/// for media/CAS/pub requests with status 200-299.
fn parse_caddy_log(content: &str) -> Vec<BandwidthEntry> {
let mut entries = Vec::new();
for line in content.lines() {
let line = line.trim();
if line.is_empty() {
continue;
}
let entry: CaddyLogEntry = match serde_json::from_str(line) {
Ok(e) => e,
Err(_) => continue,
};
// Only successful requests that carried content
if entry.status < 200 || entry.status >= 300 || entry.size == 0 {
continue;
}
// Only media, CAS, and publishing traffic
let uri = &entry.request.uri;
if !uri.starts_with("/media/")
&& !uri.starts_with("/cas/")
&& !uri.starts_with("/pub/")
{
continue;
}
let client = entry
.request
.headers
.get("User-Agent")
.or_else(|| entry.request.headers.get("user-agent"))
.and_then(|vals| vals.first())
.map(|ua| parse_client_name(ua))
.unwrap_or_else(|| "unknown".to_string());
entries.push(BandwidthEntry {
path: uri.clone(),
size_bytes: entry.size,
client,
});
}
entries
}
/// Extract a client name from a User-Agent string.
fn parse_client_name(ua: &str) -> String {
let ua_lower = ua.to_lowercase();
if ua_lower.contains("apple podcasts") || ua_lower.contains("applepodcasts") {
"Apple Podcasts".to_string()
} else if ua_lower.contains("spotify") {
"Spotify".to_string()
} else if ua_lower.contains("overcast") {
"Overcast".to_string()
} else if ua_lower.contains("pocket casts") || ua_lower.contains("pocketcasts") {
"Pocket Casts".to_string()
} else if ua_lower.contains("firefox") {
"Firefox".to_string()
} else if ua_lower.contains("chrome") {
"Chrome".to_string()
} else if ua_lower.contains("safari") {
"Safari".to_string()
} else if ua_lower.contains("bot") || ua_lower.contains("crawler") {
"bot".to_string()
} else {
// Return the first token (typically the client name)
ua.split_whitespace()
.next()
.unwrap_or("unknown")
.chars()
.take(50)
.collect()
}
}
/// Run bandwidth log parsing for a given log file.
/// Reads the file, parses the entries, and writes to resource_usage_log.
///
/// Uses a "synops_bandwidth_system" node as target_node_id
/// (the system itself, not a specific user node).
pub async fn parse_and_log_bandwidth(db: &PgPool, log_path: &str) -> Result<u64, String> {
let content = tokio::fs::read_to_string(log_path)
.await
.map_err(|e| format!("Kunne ikke lese loggfil {log_path}: {e}"))?;
let entries = parse_caddy_log(&content);
if entries.is_empty() {
tracing::info!(log_path = %log_path, "Ingen bandwidth-entries i loggfilen");
return Ok(0);
}
// Find a system node to log against. Use the Sidelinja collection
// node as a fallback target for bandwidth logging.
let system_node_id: Uuid = sqlx::query_scalar::<_, Uuid>(
"SELECT id FROM nodes WHERE node_kind = 'collection' ORDER BY created_at ASC LIMIT 1",
)
.fetch_optional(db)
.await
.map_err(|e| format!("PG-feil: {e}"))?
.unwrap_or_else(Uuid::nil);
let mut logged = 0u64;
for entry in &entries {
if let Err(e) = crate::resource_usage::log(
db,
system_node_id,
None, // system job, no user
None,
"bandwidth",
serde_json::json!({
"size_bytes": entry.size_bytes,
"path": entry.path,
"client": entry.client
}),
)
.await
{
tracing::warn!(error = %e, path = %entry.path, "Kunne ikke logge bandwidth-entry");
} else {
logged += 1;
}
}
tracing::info!(
log_path = %log_path,
entries = entries.len(),
logged = logged,
"Bandwidth-logging fullført"
);
Ok(logged)
}
/// Start periodic bandwidth parsing (nightly batch job, runs at 03:00).
pub fn start_bandwidth_parser(db: PgPool) {
tokio::spawn(async move {
tracing::info!("Bandwidth-parser startet (nattlig jobb kl 03:00)");
loop {
// Compute the time until the next run (03:00)
let now = chrono::Utc::now();
let today_03 = now
.date_naive()
.and_hms_opt(3, 0, 0)
.unwrap();
let next_run = if now.naive_utc() > today_03 {
// Already past 03:00 today; run tomorrow
today_03 + chrono::Duration::days(1)
} else {
today_03
};
let sleep_duration = (next_run - now.naive_utc())
.to_std()
.unwrap_or(std::time::Duration::from_secs(3600));
tracing::debug!(
next_run = %next_run,
sleep_secs = sleep_duration.as_secs(),
"Bandwidth-parser sover til neste kjøring"
);
tokio::time::sleep(sleep_duration).await;
// Run parsing for all log files
let log_files = vec![
"/var/log/caddy/access-sidelinja.log",
"/var/log/caddy/access-synops.log",
];
for path in &log_files {
match parse_and_log_bandwidth(&db, path).await {
Ok(count) => {
if count > 0 {
tracing::info!(path = %path, entries = count, "Bandwidth-logging fullført");
}
}
Err(e) => {
tracing::warn!(path = %path, error = %e, "Bandwidth-parsing feilet");
}
}
}
}
});
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_caddy_log_basic() {
let log = r#"{"request":{"uri":"/media/podcast/ep47.mp3","headers":{"User-Agent":["Apple Podcasts/1.0"]}},"status":200,"size":84000000,"ts":1710000000.0}
{"request":{"uri":"/api/health","headers":{}},"status":200,"size":42,"ts":1710000001.0}
{"request":{"uri":"/pub/sidelinja/article-1","headers":{"User-Agent":["Mozilla/5.0 Chrome/120"]}},"status":200,"size":15000,"ts":1710000002.0}
{"request":{"uri":"/media/podcast/ep47.mp3","headers":{}},"status":304,"size":0,"ts":1710000003.0}
"#;
let entries = parse_caddy_log(log);
assert_eq!(entries.len(), 2); // /api/health and the 304 are filtered out
assert_eq!(entries[0].path, "/media/podcast/ep47.mp3");
assert_eq!(entries[0].size_bytes, 84000000);
assert_eq!(entries[0].client, "Apple Podcasts");
assert_eq!(entries[1].path, "/pub/sidelinja/article-1");
assert_eq!(entries[1].client, "Chrome");
}
#[test]
fn test_parse_client_name() {
assert_eq!(parse_client_name("ApplePodcasts/3.0"), "Apple Podcasts");
assert_eq!(parse_client_name("Spotify/8.6.0"), "Spotify");
assert_eq!(
parse_client_name("Mozilla/5.0 (compatible; Googlebot/2.1)"),
"bot"
);
assert_eq!(
parse_client_name("Mozilla/5.0 Chrome/120.0"),
"Chrome"
);
}
#[test]
fn test_parse_caddy_log_empty() {
assert!(parse_caddy_log("").is_empty());
assert!(parse_caddy_log("\n\n").is_empty());
}
}


@@ -2071,6 +2071,32 @@ pub async fn upload_media(
None
};
// -- Log CAS resource usage (new files only, not dedup hits) --
if !cas_result.already_existed {
let cas_collection_id = if let Some(src_id) = source_id {
crate::resource_usage::find_collection_for_node(&state.db, src_id).await
} else {
None
};
if let Err(e) = crate::resource_usage::log(
&state.db,
media_node_id,
Some(user.node_id),
cas_collection_id,
"cas",
serde_json::json!({
"hash": cas_result.hash,
"size_bytes": cas_result.size,
"mime": mime,
"operation": "store"
}),
)
.await
{
tracing::warn!(error = %e, "Kunne ikke logge CAS-ressursforbruk");
}
}
// -- Enqueue transcription job for audio files --
if is_audio_mime(&mime) {
let payload = serde_json::json!({
@@ -3450,6 +3476,26 @@ pub async fn join_communication(
"Bruker koblet til LiveKit-rom"
);
// Log LiveKit resource usage (records a participant join)
let lk_collection_id = crate::resource_usage::find_collection_for_node(&state.db, comm_id).await;
if let Err(e) = crate::resource_usage::log(
&state.db,
comm_id,
Some(user.node_id),
lk_collection_id,
"livekit",
serde_json::json!({
"room_id": room_name,
"participant_minutes": 0,
"tracks": 0,
"event": "join"
}),
)
.await
{
tracing::warn!(error = %e, "Kunne ikke logge LiveKit-ressursforbruk");
}
Ok(Json(JoinCommunicationResponse {
livekit_room_name: room_name,
livekit_token: token_result.token,


@@ -2,6 +2,7 @@ pub mod agent;
pub mod ai_admin;
pub mod ai_edges;
pub mod audio;
pub mod bandwidth;
mod auth;
pub mod cas;
mod custom_domain;
@@ -13,6 +14,7 @@ pub mod pruning;
mod queries;
pub mod publishing;
pub mod health;
pub mod resource_usage;
pub mod resources;
mod rss;
mod serving;
@@ -164,6 +166,9 @@ async fn main() {
// Start the A/B evaluator in the background (task 14.17)
publishing::start_ab_evaluator(db.clone());
// Start nightly bandwidth parsing (task 15.7)
bandwidth::start_bandwidth_parser(db.clone());
let index_cache = publishing::new_index_cache();
let dynamic_page_cache = publishing::new_dynamic_page_cache();
let state = AppState { db, jwks, stdb, cas, index_cache, dynamic_page_cache, maintenance, priority_rules };


@@ -0,0 +1,68 @@
// Resource usage logging; centralized helper module.
//
// All resource-intensive operations log to the `resource_usage_log` table
// as the last step after a successful operation. Failed jobs are not logged.
//
// Ref: docs/features/ressursforbruk.md
use sqlx::PgPool;
use uuid::Uuid;
/// Logs a resource event to `resource_usage_log`.
///
/// - `target_node_id`: the node that was processed
/// - `triggered_by`: the user who triggered it (None for system jobs)
/// - `collection_id`: the collection it happened in (None if unknown)
/// - `resource_type`: "ai", "whisper", "tts", "cas", "bandwidth", "livekit"
/// - `detail`: JSONB with type-specific fields (see docs/features/ressursforbruk.md)
pub async fn log(
db: &PgPool,
target_node_id: Uuid,
triggered_by: Option<Uuid>,
collection_id: Option<Uuid>,
resource_type: &str,
detail: serde_json::Value,
) -> Result<(), String> {
sqlx::query(
r#"
INSERT INTO resource_usage_log (target_node_id, triggered_by, collection_id, resource_type, detail)
VALUES ($1, $2, $3, $4, $5)
"#,
)
.bind(target_node_id)
.bind(triggered_by)
.bind(collection_id)
.bind(resource_type)
.bind(&detail)
.execute(db)
.await
.map_err(|e| format!("Ressurslogging feilet: {e}"))?;
tracing::debug!(
target_node_id = %target_node_id,
resource_type = %resource_type,
"Ressursforbruk logget"
);
Ok(())
}
/// Find the collection ID for a node via its belongs_to edge.
/// Returns None if the node does not belong to a collection.
pub async fn find_collection_for_node(db: &PgPool, node_id: Uuid) -> Option<Uuid> {
sqlx::query_scalar::<_, Uuid>(
r#"
SELECT e.target_id FROM edges e
JOIN nodes n ON n.id = e.target_id
WHERE e.source_id = $1
AND e.edge_type = 'belongs_to'
AND n.node_kind = 'collection'
LIMIT 1
"#,
)
.bind(node_id)
.fetch_optional(db)
.await
.ok()
.flatten()
}


@@ -18,6 +18,7 @@ use sqlx::PgPool;
use uuid::Uuid;
use crate::jobs::JobRow;
use crate::resource_usage;
use crate::stdb::StdbClient;
#[derive(sqlx::FromRow)]
@@ -52,6 +53,18 @@ struct ChatMessage {
#[derive(Deserialize)]
struct ChatResponse {
choices: Vec<Choice>,
#[serde(default)]
usage: Option<UsageInfo>,
#[serde(default)]
model: Option<String>,
}
#[derive(Deserialize, Clone)]
struct UsageInfo {
#[serde(default)]
prompt_tokens: i64,
#[serde(default)]
completion_tokens: i64,
}
#[derive(Deserialize)]
@@ -201,7 +214,7 @@ pub async fn handle_summarize_communication(
);
// 5. Call LiteLLM
-let summary_text = call_llm_summary(&user_content).await?;
+let (summary_text, llm_usage, llm_model) = call_llm_summary(&user_content).await?;
tracing::info!(
communication_id = %communication_id,
@@ -318,6 +331,31 @@ pub async fn handle_summarize_communication(
"Sammendrag-node opprettet og knyttet til kommunikasjonsnode"
);
// Log AI resource usage
let collection_id = resource_usage::find_collection_for_node(db, communication_id).await;
let (tokens_in, tokens_out) = llm_usage
.map(|u| (u.prompt_tokens, u.completion_tokens))
.unwrap_or((0, 0));
if let Err(e) = resource_usage::log(
db,
communication_id,
Some(requested_by),
collection_id,
"ai",
serde_json::json!({
"model_level": "smart",
"model_id": llm_model.unwrap_or_else(|| "unknown".to_string()),
"tokens_in": tokens_in,
"tokens_out": tokens_out,
"job_type": "summarize_communication"
}),
)
.await
{
tracing::warn!(error = %e, "Kunne ikke logge AI-ressursforbruk for oppsummering");
}
Ok(serde_json::json!({
"status": "completed",
"summary_node_id": summary_node_id.to_string(),
@@ -326,8 +364,8 @@ pub async fn handle_summarize_communication(
}))
}
-/// Call LiteLLM for summarization.
-async fn call_llm_summary(user_content: &str) -> Result<String, String> {
+/// Call LiteLLM for summarization. Returns (text, usage, model).
+async fn call_llm_summary(user_content: &str) -> Result<(String, Option<UsageInfo>, Option<String>), String> {
let gateway_url = std::env::var("AI_GATEWAY_URL")
.unwrap_or_else(|_| "http://localhost:4000".to_string());
let api_key = std::env::var("LITELLM_MASTER_KEY").unwrap_or_default();
@@ -381,5 +419,5 @@ async fn call_llm_summary(user_content: &str) -> Result<String, String> {
.and_then(|c| c.message.content.as_deref())
.ok_or("LiteLLM returnerte ingen content")?;
-Ok(content.to_string())
+Ok((content.to_string(), chat_resp.usage, chat_resp.model))
}


@@ -11,6 +11,7 @@ use uuid::Uuid;
use crate::cas::CasStore;
use crate::jobs::JobRow;
use crate::resource_usage;
use crate::stdb::StdbClient;
/// A parsed SRT segment.
@@ -157,6 +158,31 @@ pub async fn handle_whisper_job(
)
.await?;
// Log resource usage (Whisper)
let triggered_by: Option<Uuid> = job.payload["requested_by"]
.as_str()
.and_then(|s| s.parse().ok());
let collection_id = resource_usage::find_collection_for_node(db, media_node_id).await;
let duration_seconds = (duration_ms as f64) / 1000.0;
if let Err(e) = resource_usage::log(
db,
media_node_id,
triggered_by,
collection_id,
"whisper",
serde_json::json!({
"model": model,
"duration_seconds": duration_seconds,
"language": language,
"mode": "batch"
}),
)
.await
{
tracing::warn!(error = %e, "Kunne ikke logge Whisper-ressursforbruk");
}
Ok(serde_json::json!({
"segments": segments.len(),
"transcript_length": transcript_text.len(),


@@ -24,6 +24,7 @@ use uuid::Uuid;
use crate::cas::CasStore;
use crate::jobs::JobRow;
use crate::resource_usage;
use crate::stdb::StdbClient;
/// Max text length for TTS (the ElevenLabs limit is 5000 characters per call).
@@ -179,22 +180,23 @@ pub async fn handle_tts_job(
}
// 5. Log resource usage
-sqlx::query(
-r#"
-INSERT INTO resource_usage_log (target_node_id, triggered_by, resource_type, detail)
-VALUES ($1, $2, 'tts', $3)
-"#,
+let collection_id = resource_usage::find_collection_for_node(db, media_node_id).await;
+if let Err(e) = resource_usage::log(
+db,
+media_node_id,
+Some(requested_by),
+collection_id,
+"tts",
+serde_json::json!({
+"provider": "elevenlabs",
+"characters": text.len(),
+"voice_id": voice_id
+}),
)
-.bind(media_node_id)
-.bind(requested_by)
-.bind(serde_json::json!({
-"provider": "elevenlabs",
-"characters": text.len(),
-"voice_id": voice_id
-}))
-.execute(db)
.await
-.map_err(|e| format!("Ressurslogging feilet: {e}"))?;
+{
+tracing::warn!(error = %e, "Kunne ikke logge TTS-ressursforbruk");
+}
Ok(serde_json::json!({
"status": "completed",


@@ -169,8 +169,7 @@ Uavhengige faser kan fortsatt plukkes.
- [x] 15.4 AI Gateway configuration: admin UI for model overview, API keys (encrypted), routing rules per job type, fallback chains, usage overview per collection. Ref: `docs/infra/ai_gateway.md`.
- [x] 15.5 Resource management: priority rules between job types, resource limits per worker, a resource governor that automatically deprioritizes during active LiveKit sessions, disk status with alerting.
- [x] 15.6 Server health dashboard: service status (PG, STDB, Caddy, Authentik, LiteLLM, Whisper, LiveKit), metrics (CPU, memory, disk), backup status, log access.
-- [~] 15.7 Resource usage logging: `resource_usage_log` table in PG. Maskinrommet logs AI tokens (in/out, model level), Whisper time (sec), TTS characters, CAS storage (bytes), LiveKit time (participant min). Bandwidth via Caddy log parsing. Ref: `docs/features/ressursforbruk.md`.
+- [x] 15.7 Resource usage logging: `resource_usage_log` table in PG. Maskinrommet logs AI tokens (in/out, model level), Whisper time (sec), TTS characters, CAS storage (bytes), LiveKit time (participant min). Bandwidth via Caddy log parsing. Ref: `docs/features/ressursforbruk.md`.
-> Started: 2026-03-18T04:14
- [ ] 15.8 Usage overview in admin: aggregated view per collection, per resource type, per time period. Drill-down to job type and model level.
- [ ] 15.9 User-visible usage: each user sees their own usage in profile/settings. Per-node usage visible in node details for owners.