Architecture

Comprehensive guide to Studio Platform's system architecture, including microservices design, data flow, security model, and deployment patterns.

🏗️ System Architecture Overview

High-Level Architecture

Studio Platform is built on a microservices architecture that provides scalability, resilience, and maintainability. The system is designed to handle enterprise-scale compliance operations while maintaining security and performance.

graph TD
    A[Client Applications] --> B[API Gateway]
    B --> C[Frontend Service]
    B --> D[Backend Service]
    B --> E[AI Service]
    B --> F[Document Service]

    D --> G[PostgreSQL]
    D --> H[Neo4j]
    D --> I[Redis]

    E --> J[Vector Store]
    E --> K[AI Models]

    F --> L[MinIO Storage]
    F --> M[Processing Queue]

    N[Identity Service] --> B
    O[Policy Service] --> B
    P[Monitoring Service] --> Q[Prometheus]
    P --> R[Grafana]

    S[Fleet Service] --> G
    T[Prowler Service] --> G
    U[Observability Stack] --> P

Architecture Principles

Core Principles:

  • Microservices - Service-oriented architecture
  • Cloud-Native - Designed for cloud deployment
  • API-First - API-driven design
  • Event-Driven - Event-based communication
  • Security-First - Security by design
  • Scalability - Horizontal scalability
  • Resilience - Fault tolerance and recovery

Design Patterns:

  • CQRS - Command Query Responsibility Segregation
  • Event Sourcing - Event-driven state management
  • Saga Pattern - Distributed transaction management
  • Circuit Breaker - Fault tolerance pattern
  • API Gateway - Single entry point
  • Service Mesh - Service communication
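
Of these patterns, the Circuit Breaker is the easiest to show in a few lines. The sketch below (in Python, purely for illustration; the class and parameter names are not from the Studio Platform codebase) trips after a threshold of consecutive failures and fails fast until a reset timeout elapses:

```python
# circuit_breaker.py -- minimal sketch of the Circuit Breaker pattern.
# CircuitBreaker, failure_threshold, and reset_timeout are illustrative names.
import time


class CircuitOpenError(Exception):
    """Raised when the breaker is open and calls are short-circuited."""


class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        # If open, only allow a trial call after the reset timeout (half-open).
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise CircuitOpenError("circuit is open; failing fast")
            self.opened_at = None  # half-open: let one call through

        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            self.failures = 0  # a success closes the circuit again
            return result
```

Wrapping downstream calls this way keeps one failing service from tying up the callers' threads while it recovers.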

🔌 Microservices Architecture

Service Components

Frontend Service

Technology Stack:

  • Framework - Next.js 13+
  • Language - TypeScript
  • Styling - Tailwind CSS
  • State Management - React Query
  • UI Components - Radix UI

Responsibilities:

  • User Interface - Web-based user interface
  • Client-Side Logic - Frontend business logic
  • User Experience - UX optimization
  • Authentication - Client-side authentication
  • API Communication - Backend API integration

Service Configuration:

# frontend-service.yml
service:
  name: frontend
  version: 1.0.0
  port: 3000
  replicas: 2

resources:
  cpu: 500m
  memory: 1Gi

environment:
  NODE_ENV: production
  NEXT_PUBLIC_API_URL: http://api-gateway:8000
  NEXT_PUBLIC_WS_URL: ws://api-gateway:8000

health:
  path: /api/health
  interval: 30s
  timeout: 10s
  retries: 3

Backend Service

Technology Stack:

  • Runtime - Node.js 18+
  • Framework - Express.js
  • Language - TypeScript
  • Databases - PostgreSQL, Neo4j, Redis
  • ORM - Prisma

Responsibilities:

  • API Management - RESTful API endpoints
  • Business Logic - Core business logic
  • Data Management - Database operations
  • Authentication - User authentication
  • Authorization - Access control

Service Configuration:

# backend-service.yml
service:
  name: backend
  version: 1.0.0
  port: 4000
  replicas: 3

resources:
  cpu: 1000m
  memory: 2Gi

environment:
  NODE_ENV: production
  DATABASE_URL: postgresql://user:pass@postgres:5432/studio
  NEO4J_URL: bolt://neo4j:7687
  REDIS_URL: redis://redis:6379

database:
  postgres:
    host: postgres
    port: 5432
    database: studio
    username: studio
    password: ${POSTGRES_PASSWORD}

  neo4j:
    host: neo4j
    port: 7687
    username: neo4j
    password: ${NEO4J_PASSWORD}

  redis:
    host: redis
    port: 6379
    db: 0

AI Service

Technology Stack:

  • Runtime - Python 3.11+
  • Framework - FastAPI
  • AI Models - Google Gemini, OpenAI
  • Vector Database - ChromaDB
  • Processing - Celery

Responsibilities:

  • AI Assistant - Conversational AI interface
  • Policy Generation - Automated policy creation
  • Evidence Analysis - Document analysis
  • Compliance Insights - AI-powered insights
  • Natural Language Processing - Text processing

Service Configuration:

# ai-service.yml
service:
  name: ai-service
  version: 1.0.0
  port: 5000
  replicas: 2

resources:
  cpu: 2000m
  memory: 4Gi

environment:
  PYTHONPATH: /app
  GOOGLE_API_KEY: ${GOOGLE_API_KEY}
  OPENAI_API_KEY: ${OPENAI_API_KEY}
  CHROMA_HOST: chroma
  CHROMA_PORT: 8000

ai_models:
  primary: gemini-2.5-flash
  fallback: gpt-4
  temperature: 0.7
  max_tokens: 4096

vector_store:
  type: chroma
  host: chroma
  port: 8000
  collection: studio_embeddings

Document Service

Technology Stack:

  • Runtime - Node.js 18+
  • Framework - Express.js
  • Storage - MinIO
  • Processing - Sharp, PDF.js
  • Queue - Redis Queue

Responsibilities:

  • File Upload - Document upload handling
  • File Processing - Document processing
  • Storage Management - File storage
  • Metadata Extraction - Document metadata
  • Thumbnail Generation - Image thumbnails

Service Configuration:

# document-service.yml
service:
  name: document-service
  version: 1.0.0
  port: 5002
  replicas: 2

resources:
  cpu: 1000m
  memory: 2Gi

environment:
  NODE_ENV: production
  MINIO_ENDPOINT: minio:9000
  MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
  MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
  MINIO_BUCKET: evidence

storage:
  type: minio
  endpoint: minio:9000
  access_key: ${MINIO_ACCESS_KEY}
  secret_key: ${MINIO_SECRET_KEY}
  bucket: evidence
  region: us-east-1

processing:
  max_file_size: 100MB
  supported_formats: pdf,doc,docx,xls,xlsx,png,jpg
  thumbnail_size: 200x200
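
The `processing` settings above imply a validation step before a file ever reaches MinIO. A sketch of that check (in Python for brevity; the function and constant names are illustrative, not the document service's actual code):

```python
# upload_checks.py -- sketch of pre-upload validation against the
# max_file_size and supported_formats settings shown above.
MAX_FILE_SIZE = 100 * 1024 * 1024  # 100MB, matching max_file_size
SUPPORTED_FORMATS = {"pdf", "doc", "docx", "xls", "xlsx", "png", "jpg"}


def validate_upload(file_name, file_size):
    """Return (ok, reason); reason is None when the upload is acceptable."""
    extension = file_name.rsplit(".", 1)[-1].lower() if "." in file_name else ""
    if extension not in SUPPORTED_FORMATS:
        return False, f"unsupported format: {extension or 'none'}"
    if file_size > MAX_FILE_SIZE:
        return False, "file exceeds 100MB limit"
    return True, None
```

Rejecting bad uploads at the service boundary keeps oversized or unsupported files out of the storage bucket and the processing queue entirely.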

Service Communication

API Gateway

Gateway Configuration:

# api-gateway.yml
service:
  name: api-gateway
  version: 1.0.0
  port: 8000
  replicas: 2

routes:
  - path: /api/v1/auth
    service: identity-service
    methods: [POST, GET, DELETE]

  - path: /api/v1/users
    service: backend-service
    methods: [GET, POST, PUT, DELETE]

  - path: /api/v1/projects
    service: backend-service
    methods: [GET, POST, PUT, DELETE]

  - path: /api/v1/evidence
    service: document-service
    methods: [GET, POST, PUT, DELETE]

  - path: /api/v1/ai
    service: ai-service
    methods: [POST, GET]

plugins:
  - name: rate-limiting
    config:
      minute: 1000
      hour: 10000

  - name: cors
    config:
      origins: ["*"]
      methods: [GET, POST, PUT, DELETE, OPTIONS]

  - name: jwt
    config:
      secret_is_base64: false
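
The rate-limiting plugin above is typically implemented as a token bucket: tokens refill at a steady rate (1000/minute here) and each request spends one. A minimal, deterministic sketch in Python (the class and parameter names are illustrative, not the gateway's implementation):

```python
# token_bucket.py -- sketch of token-bucket rate limiting. The clock is
# passed in explicitly so the behaviour is easy to reason about and test.
class TokenBucket:
    def __init__(self, capacity, refill_per_second):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last = 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_second)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # request admitted
        return False      # request rejected (HTTP 429 at the gateway)
```

A limit of 1000 requests per minute corresponds to `TokenBucket(capacity=1000, refill_per_second=1000 / 60)`; the capacity also bounds how large a burst the gateway will absorb.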

Service Mesh

Mesh Configuration:

# service-mesh.yml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: studio-services
spec:
  hosts:
    - studio.local
  gateways:
    - studio-gateway
  http:
    - match:
        - uri:
            prefix: /api/v1/users
      route:
        - destination:
            host: backend-service
            port:
              number: 4000
      timeout: 30s
      retries:
        attempts: 3
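
The mesh applies the retry policy transparently, but the same logic can be written at the application level. A sketch of three attempts with exponential backoff (Python for brevity; the helper name and delays are illustrative, and the sleep function is injectable so the behaviour is testable):

```python
# retry.py -- sketch of a 3-attempt retry with exponential backoff,
# mirroring the mesh's `retries: attempts: 3` policy.
import time


def retry(fn, attempts=3, base_delay=0.1, sleep=time.sleep):
    """Call fn, retrying on any exception with exponentially growing delays."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_error = exc
            if attempt < attempts - 1:
                sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, ...
    raise last_error
```

Note that retries only belong on idempotent operations; retrying a non-idempotent write can duplicate it, which is exactly the problem the Saga pattern listed earlier addresses.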

🗄️ Data Architecture

Database Architecture

PostgreSQL (Primary Database)

Schema Design:

-- Users table
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email VARCHAR(255) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    name VARCHAR(255) NOT NULL,
    role VARCHAR(50) NOT NULL DEFAULT 'customer',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    last_login TIMESTAMP
);

-- Projects table
CREATE TABLE projects (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(255) NOT NULL,
    description TEXT,
    framework VARCHAR(50) NOT NULL,
    status VARCHAR(50) NOT NULL DEFAULT 'active',
    compliance_score INTEGER DEFAULT 0,
    created_by UUID REFERENCES users(id),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Evidence table
CREATE TABLE evidence (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    title VARCHAR(255) NOT NULL,
    description TEXT,
    file_name VARCHAR(255) NOT NULL,
    file_size BIGINT NOT NULL,
    content_type VARCHAR(100) NOT NULL,
    file_path VARCHAR(500) NOT NULL,
    project_id UUID REFERENCES projects(id),
    control_id VARCHAR(100) NOT NULL,
    uploaded_by UUID REFERENCES users(id),
    quality_score INTEGER DEFAULT 0,
    status VARCHAR(50) NOT NULL DEFAULT 'pending',
    uploaded_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Controls table
CREATE TABLE controls (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    framework VARCHAR(50) NOT NULL,
    control_number VARCHAR(50) NOT NULL,
    title VARCHAR(255) NOT NULL,
    description TEXT,
    category VARCHAR(100),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
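
To make the relationships in this schema concrete, here is the same model reduced to an in-memory SQLite sketch (types simplified, UUIDs stored as text, only the columns needed for the join kept; the email is a made-up example):

```python
# schema_demo.py -- exercising the users/projects relationship locally.
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE users (
        id TEXT PRIMARY KEY,
        email TEXT UNIQUE NOT NULL,
        role TEXT NOT NULL DEFAULT 'customer'
    )
""")
conn.execute("""
    CREATE TABLE projects (
        id TEXT PRIMARY KEY,
        name TEXT NOT NULL,
        framework TEXT NOT NULL,
        compliance_score INTEGER DEFAULT 0,
        created_by TEXT REFERENCES users(id)
    )
""")

user_id = str(uuid.uuid4())
project_id = str(uuid.uuid4())
conn.execute("INSERT INTO users (id, email) VALUES (?, ?)",
             (user_id, "[email protected]"))
conn.execute(
    "INSERT INTO projects (id, name, framework, created_by) VALUES (?, ?, ?, ?)",
    (project_id, "SOC 2 Assessment", "soc2", user_id),
)

# Join a project back to the user who created it.
row = conn.execute("""
    SELECT p.name, p.framework, u.email
    FROM projects p JOIN users u ON u.id = p.created_by
""").fetchone()
```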

Database Configuration:

# postgresql.yml
database:
  name: studio
  user: studio
  password: ${POSTGRES_PASSWORD}
  host: postgres
  port: 5432

  connections:
    min: 5
    max: 20
    idle_timeout: 300
    max_lifetime: 3600

  performance:
    shared_buffers: 256MB
    effective_cache_size: 1GB
    work_mem: 4MB
    maintenance_work_mem: 64MB
    checkpoint_completion_target: 0.9
    wal_buffers: 16MB
    default_statistics_target: 100

Neo4j (Graph Database)

Graph Model:

// Users, projects, and controls
CREATE (u:User {id: 'user_123', name: 'John Doe', email: '[email protected]'})
CREATE (p:Project {id: 'proj_123', name: 'SOC 2 Assessment', framework: 'soc2'})
CREATE (c:Control {id: 'ctrl_123', number: 'A1.1', title: 'Information Security Policies'})

// Relationships
CREATE (u)-[:MEMBER_OF {role: 'manager'}]->(p)
CREATE (p)-[:INCLUDES]->(c)
CREATE (u)-[:OWNS]->(c)

Neo4j Configuration:

# neo4j.yml
database:
  name: neo4j
  user: neo4j
  password: ${NEO4J_PASSWORD}
  host: neo4j
  port: 7687

  memory:
    heap:
      initial: 512m
      max: 2G
    pagecache:
      size: 1G

  security:
    auth_enabled: true
    auth_minimum_password_length: 8

  network:
    default_advertised_address: neo4j
    default_listen_address: 0.0.0.0

Redis (Cache and Queue)

Redis Configuration:

# redis.yml
database:
  host: redis
  port: 6379
  db: 0

  memory:
    maxmemory: 2gb
    maxmemory_policy: allkeys-lru

  persistence:
    save: "900 1 300 10 60 10000"
    appendonly: yes
    appendfsync: everysec

  security:
    requirepass: ${REDIS_PASSWORD}

  performance:
    tcp-keepalive: 300
    timeout: 0

Data Flow Architecture

Data Flow Patterns

Command Flow:

sequenceDiagram
    participant Client
    participant Gateway
    participant Backend
    participant Database
    participant Cache

    Client->>Gateway: POST /api/v1/projects
    Gateway->>Backend: Forward request
    Backend->>Cache: Check cache
    Cache-->>Backend: Cache miss
    Backend->>Database: Insert project
    Database-->>Backend: Project created
    Backend->>Cache: Update cache
    Backend-->>Gateway: Response
    Gateway-->>Client: Project data

Query Flow:

sequenceDiagram
    participant Client
    participant Gateway
    participant Backend
    participant Cache
    participant Database

    Client->>Gateway: GET /api/v1/projects
    Gateway->>Backend: Forward request
    Backend->>Cache: Check cache
    Cache-->>Backend: Cache hit
    Backend-->>Gateway: Cached data
    Gateway-->>Client: Project data
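
Together, the two flows above form the cache-aside pattern: reads check the cache first, fall back to the database on a miss, and repopulate the cache on the way out. A sketch in Python (dicts stand in for Redis and PostgreSQL; the class name is illustrative):

```python
# cache_aside.py -- the read path above as a cache-aside sketch.
class ProjectReader:
    def __init__(self, cache, database):
        self.cache = cache          # Redis in the production setup
        self.database = database    # PostgreSQL in the production setup
        self.db_reads = 0           # instrumentation for the sketch

    def get_project(self, project_id):
        cached = self.cache.get(project_id)
        if cached is not None:
            return cached                       # cache hit: no DB round-trip
        self.db_reads += 1
        record = self.database[project_id]      # cache miss: read the source
        self.cache[project_id] = record         # populate for later reads
        return record
```

In practice the cached entry would also carry a TTL and be invalidated by the command flow's "Update cache" step, so writes do not serve stale reads.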

🔒 Security Architecture

Security Layers

Authentication Layer

Identity Service:

# identity-service.yml
service:
  name: identity-service
  version: 1.0.0
  port: 4433

authentication:
  providers:
    - type: oidc
      issuer: https://auth.studio.com
      client_id: ${OIDC_CLIENT_ID}
      client_secret: ${OIDC_CLIENT_SECRET}

    - type: saml
      issuer: https://saml.studio.com
      cert_file: /etc/secrets/saml.crt
      key_file: /etc/secrets/saml.key

  mfa:
    providers:
      - type: totp
        issuer: Studio Platform
        algorithm: SHA256
        digits: 6
        period: 30

      - type: sms
        provider: twilio
        account_sid: ${TWILIO_ACCOUNT_SID}
        auth_token: ${TWILIO_AUTH_TOKEN}
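
The TOTP provider configured above follows RFC 6238: hash a time-step counter with HMAC (SHA256 here), dynamically truncate the digest, and keep 6 digits. A self-contained sketch (the secret is a made-up example; real secrets come from enrolment):

```python
# totp_sketch.py -- RFC 6238 TOTP with HMAC-SHA256, 6 digits, 30s period,
# matching the provider settings above.
import hmac
import hashlib
import struct


def totp(secret: bytes, unix_time: int, period: int = 30, digits: int = 6) -> str:
    counter = unix_time // period                       # 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from the shared secret and the current time step, verification is just recomputing the code server-side (usually allowing one step of clock drift in either direction).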

Authorization Layer

Policy Service:

# policy-service.yml
service:
  name: policy-service
  version: 1.0.0
  port: 8181

authorization:
  model: rbac + abac
  policies:
    - name: admin_access
      effect: allow
      actions: ["*"]
      resources: ["*"]
      conditions:
        role: admin

    - name: user_access
      effect: allow
      actions: ["read", "write"]
      resources: ["own_data"]
      conditions:
        user_id: token.user_id

    - name: project_access
      effect: allow
      actions: ["read", "write"]
      resources: ["projects"]
      conditions:
        role: ["manager", "auditor"]
        project_membership: true
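
The "rbac + abac" model above combines role checks with attribute conditions such as project membership. A sketch of how an evaluator could walk these policies (Python for brevity; the policy dicts mirror the YAML, but the evaluator itself is illustrative, not the policy service's code):

```python
# policy_eval.py -- default-deny evaluation over role (RBAC) and
# attribute (ABAC) conditions.
def is_allowed(policies, role, action, resource, attributes=None):
    attributes = attributes or {}
    for policy in policies:
        conditions = policy.get("conditions", {})

        # RBAC: the condition may name a single role or a list of roles.
        allowed_roles = conditions.get("role")
        if allowed_roles is not None:
            roles = allowed_roles if isinstance(allowed_roles, list) else [allowed_roles]
            if role not in roles:
                continue

        # ABAC: attribute conditions, e.g. project membership.
        if conditions.get("project_membership") and not attributes.get("project_membership"):
            continue

        if action in policy["actions"] or "*" in policy["actions"]:
            if resource in policy["resources"] or "*" in policy["resources"]:
                return policy["effect"] == "allow"
    return False  # nothing matched: default deny
```

Note the final `return False`: requests that match no policy are denied, which is the safe default for an authorization layer.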

Network Security

Security Configuration:

# security.yml
network:
  firewall:
    rules:
      - name: allow_http
        port: 80
        protocol: tcp
        action: allow

      - name: allow_https
        port: 443
        protocol: tcp
        action: allow

      - name: deny_ssh
        port: 22
        protocol: tcp
        action: deny
        source: 0.0.0.0/0

  tls:
    version: "1.3"
    ciphers:
      - TLS_AES_256_GCM_SHA384
      - TLS_CHACHA20_POLY1305_SHA256
      - TLS_AES_128_GCM_SHA256

    certificates:
      cert_file: /etc/secrets/cert.pem
      key_file: /etc/secrets/key.pem
      ca_file: /etc/secrets/ca.pem

Data Protection

Encryption Architecture

Encryption Configuration:

# encryption.yml
encryption:
  at_rest:
    algorithm: AES-256-GCM
    key_management: vault
    key_rotation: quarterly

  in_transit:
    tls_version: "1.3"
    cipher_suites:
      - TLS_AES_256_GCM_SHA384
      - TLS_CHACHA20_POLY1305_SHA256
    certificate_validation: strict

  key_management:
    provider: vault
    address: https://vault.studio.com
    token: ${VAULT_TOKEN}

    keys:
      - name: data_encryption
        type: aes256-gcm
        rotation: quarterly

      - name: api_keys
        type: hmac-sha256
        rotation: monthly
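
The `hmac-sha256` key type above is used for signing, not encrypting: the server derives a signature over a payload with a secret only it holds, and later verifies it in constant time. A sketch (the secret is a placeholder; in the setup described it would be fetched from Vault):

```python
# api_key_hmac.py -- HMAC-SHA256 signing and constant-time verification.
import hmac
import hashlib


def sign(secret: bytes, payload: bytes) -> str:
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()


def verify(secret: bytes, payload: bytes, signature: str) -> bool:
    expected = sign(secret, payload)
    # compare_digest avoids leaking where the strings diverge (timing attack)
    return hmac.compare_digest(expected, signature)
```

Monthly rotation (as configured above) means verification must accept signatures from both the current and the previous key during the rotation window, or signed tokens issued just before rotation would all be invalidated at once.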

🚀 Deployment Architecture

Container Architecture

Docker Configuration

Multi-Stage Dockerfile:

# Backend Dockerfile
FROM node:18-alpine AS builder

WORKDIR /app
COPY package*.json ./
# Install all dependencies (the build step needs devDependencies),
# then prune them back out once the build is done.
RUN npm ci

COPY . .
RUN npm run build && npm prune --omit=dev

FROM node:18-alpine AS runtime

RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist

USER nodejs

EXPOSE 4000

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:4000/api/health || exit 1

CMD ["npm", "start"]

Docker Compose Configuration

Development Environment:

# docker-compose.dev.yml
version: '3.8'

services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: development
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
      - WATCHPACK_POLLING=true
    depends_on:
      - backend

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: development
    ports:
      - "4000:4000"
    volumes:
      - ./backend:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://studio:dev@postgres:5432/studio_dev
    depends_on:
      - postgres
      - redis
      - neo4j

  ai-service:
    build:
      context: ./ai-service
      dockerfile: Dockerfile
      target: development
    ports:
      - "5000:5000"
    volumes:
      - ./ai-service:/app
      - /app/venv
    environment:
      - PYTHONPATH=/app
      - GOOGLE_API_KEY=${GOOGLE_API_KEY}
    depends_on:
      - chroma

  postgres:
    image: pgvector/pgvector:pg15
    environment:
      POSTGRES_USER: studio
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: studio_dev
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data

  neo4j:
    image: neo4j:community
    environment:
      NEO4J_AUTH: neo4j/dev
      NEO4J_dbms_memory_heap_initial_size: 512m
      NEO4J_dbms_memory_heap_max_size: 2G
    volumes:
      - neo4j_data:/data
    ports:
      - "7474:7474"
      - "7687:7687"

  chroma:
    image: chromadb/chroma:latest
    ports:
      - "8000:8000"
    volumes:
      - chroma_data:/chroma/chroma

volumes:
  postgres_data:
  redis_data:
  neo4j_data:
  chroma_data:

Kubernetes Architecture

Kubernetes Configuration

Deployment Configuration:

# backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-service
  labels:
    app: backend-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend-service
  template:
    metadata:
      labels:
        app: backend-service
    spec:
      containers:
      - name: backend
        image: studio/backend:latest
        ports:
        - containerPort: 4000
        env:
        - name: NODE_ENV
          value: "production"
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-secret
              key: url
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 1000m
            memory: 2Gi
        livenessProbe:
          httpGet:
            path: /api/health
            port: 4000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/ready
            port: 4000
          initialDelaySeconds: 5
          periodSeconds: 5

Service Configuration:

# backend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend-service
  ports:
  - protocol: TCP
    port: 4000
    targetPort: 4000
  type: ClusterIP

Ingress Configuration:

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: studio-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - api.studio.com
    secretName: studio-tls
  rules:
  - host: api.studio.com
    http:
      paths:
      - path: /api/v1
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 4000
      - path: /api/v1/ai
        pathType: Prefix
        backend:
          service:
            name: ai-service
            port:
              number: 5000

📊 Observability Architecture

Monitoring Stack

Prometheus Configuration

Prometheus Configuration:

# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "rules/*.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - alertmanager:9093

scrape_configs:
  - job_name: 'studio-services'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__

Grafana Configuration

Dashboard Configuration:

{
  "dashboard": {
    "title": "Studio Platform Overview",
    "panels": [
      {
        "title": "Request Rate",
        "type": "graph",
        "targets": [
          {
            "expr": "rate(http_requests_total[5m])",
            "legendFormat": "{{ service }}"
          }
        ]
      },
      {
        "title": "Response Time",
        "type": "graph",
        "targets": [
          {
            "expr": "histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))",
            "legendFormat": "95th percentile"
          }
        ]
      },
      {
        "title": "Error Rate",
        "type": "graph",
        "targets": [
          {
            "expr": "rate(http_requests_total{status=~\"5..\"}[5m])",
            "legendFormat": "5xx errors"
          }
        ]
      }
    ]
  }
}

Logging Architecture

ELK Stack Configuration

Logstash Configuration:

# logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][service] == "studio-backend" {
    json {
      source => "message"
    }

    date {
      match => [ "timestamp", "ISO8601" ]
    }

    mutate {
      add_field => { "service" => "studio-backend" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "studio-logs-%{+YYYY.MM.dd}"
  }
}
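
For the filter above to work, the backend has to emit each log record as a single JSON object with a `timestamp` the `date` filter can parse. A sketch of such a formatter (Python's stdlib `logging` here for illustration; the actual backend is Node.js, and the formatter name is made up):

```python
# json_logs.py -- structured logging that the Logstash pipeline above
# expects: one JSON object per line, with ISO8601 timestamp and a
# service field for routing.
import json
import logging
from datetime import datetime, timezone


class JsonFormatter(logging.Formatter):
    def __init__(self, service):
        super().__init__()
        self.service = service

    def format(self, record):
        return json.dumps({
            "timestamp": datetime.fromtimestamp(
                record.created, tz=timezone.utc).isoformat(),
            "level": record.levelname,
            "service": self.service,
            "message": record.getMessage(),
        })
```

One JSON object per line is the contract that makes the pipeline's `json { source => "message" }` step reliable; multi-line or free-form log messages are what usually break it.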

✅ Architecture Best Practices

Design Principles

Microservices Best Practices

  • Single Responsibility - Each service has a single responsibility
  • Loose Coupling - Services are loosely coupled
  • High Cohesion - Services are highly cohesive
  • API-First - Design APIs first
  • Fault Tolerance - Design for failure
  • Observability - Make services observable

Data Architecture Best Practices

  • Data Consistency - Ensure data consistency
  • Data Security - Protect data at rest and in transit
  • Data Privacy - Respect data privacy
  • Data Governance - Implement data governance
  • Data Quality - Ensure data quality
  • Data Retention - Implement data retention policies

Common Architecture Mistakes

Avoid These Mistakes:

  • Not designing for scalability
  • Not implementing proper security
  • Not considering fault tolerance
  • Not implementing proper monitoring
  • Not designing for maintainability

Follow These Best Practices:

  • Design for scalability and performance
  • Implement security by design
  • Design for fault tolerance and resilience
  • Implement comprehensive monitoring
  • Design for maintainability and extensibility


!!! tip "Start Small"
    Begin with a simple architecture and evolve it as your needs grow. Don't over-engineer from the start.

!!! note "Security First"
    Always prioritize security in architecture decisions. Implement security controls at every layer.

!!! question "Need Help?"
    Check our Architecture Support for architecture assistance, or join our developer community.