Knowledge Base

Everything you need to know about deploying, licensing, customizing, and integrating Analytica Pro into your data stack.

Technical Requirements

What are the minimum server specifications for on-premise deployment?

For a standard on-premise deployment of Analytica Pro, we recommend the following minimum specifications:

  • RAM: 16 GB (32 GB recommended for datasets exceeding 10M rows)
  • CPU: 8-core processor (Intel Xeon or AMD EPYC series preferred)
  • Storage: 256 GB NVMe SSD with a minimum of 500 MB/s sequential read
  • Network: 1 Gbps dedicated bandwidth for real-time streaming workloads

For enterprise clusters serving more than 500 concurrent users, we recommend horizontal scaling with our Kubernetes orchestration module, which auto-provisions resources based on query load.

Does Analytica Pro support multi-region data sovereignty?

Yes. Analytica Pro supports full multi-region data sovereignty across all major cloud providers:

  • AWS: Deploy to any of 30+ regions with S3-native storage integration
  • Azure: Full compatibility with Azure Government and sovereign clouds
  • GCP: Regional and dual-region bucket configurations supported

Our data residency controls ensure that raw data never leaves the designated region. Processing, caching, and query execution all occur within the boundaries you define. Compliance certificates for GDPR, HIPAA, and SOC 2 Type II are available upon request.

Licensing

Can we scale seat licenses dynamically?

Absolutely. Analytica Pro uses a flexible seat-based licensing model designed for dynamic teams:

  • Admin Dashboard: Add or remove seats instantly from the License Management panel
  • Prorated Billing: New seats are billed on a prorated basis from the activation date to the end of your billing cycle
  • Auto-Scaling: Enable auto-seat provisioning to automatically assign licenses when new users are added via SSO/SCIM
  • Volume Discounts: Tiers unlock at 50, 200, and 500+ seats with progressive discounts up to 35%

Downgrades take effect at the next billing cycle. No penalties, no lock-in contracts.
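The proration rule above can be illustrated with a short sketch. The per-seat price, dates, and 30-day cycle below are assumptions for illustration only, not Analytica Pro's actual rates:

```python
# Hypothetical illustration of prorated seat billing: a seat added
# mid-cycle is charged only for the days remaining in that cycle.
# Price and cycle length are assumed values, not actual rates.
from datetime import date

def prorated_charge(seat_price: float, activation: date,
                    cycle_end: date, cycle_days: int = 30) -> float:
    """Charge for one new seat from activation to the end of the billing cycle."""
    remaining_days = (cycle_end - activation).days
    return round(seat_price * remaining_days / cycle_days, 2)

# A $30/month seat activated 10 days before the cycle ends costs $10.
print(prorated_charge(30.0, date(2024, 6, 20), date(2024, 6, 30)))
```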

Customization

Are white-label dashboard exports supported?

Yes. Analytica Pro provides comprehensive white-label capabilities for all exported dashboards:

  • CSS Customization: Override any visual element using our theme editor or a custom CSS stylesheet. Colors, fonts, spacing, and layouts are all fully configurable
  • Logo Injection: Upload your company logo and favicon to replace Analytica Pro branding across all exported reports, PDFs, and embedded views
  • Custom Domains: Serve embedded dashboards from your own subdomain (e.g., analytics.yourcompany.com)
  • Email Templates: Scheduled report emails use your brand colors, logo, and reply-to address

White-label features are available on Professional and Enterprise plans. Contact our team for a branded demo.

AI Services

How is data privacy handled for LLM fine-tuning?

Data privacy is foundational to our AI architecture. We use what we call the Private Context model:

  • Zero Data Retention: Your data is never stored by third-party LLM providers. All API calls use ephemeral sessions with no training data retention
  • On-Premise Fine-Tuning: For customers who require fine-tuned models, training occurs entirely within your infrastructure using our containerized training pipeline
  • Differential Privacy: When aggregate insights are generated, we apply differential privacy techniques to prevent reverse-engineering of individual records
  • Audit Logging: Every AI interaction is logged with full provenance tracking, including the prompt, model version, and response hash

Our Private Context architecture has been independently audited and holds SOC 2 Type II and ISO 27001 certifications.
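To make the differential-privacy point concrete, here is a minimal sketch of the Laplace mechanism, the standard technique for noising aggregate counts so individual records cannot be reverse-engineered. Function and parameter names are illustrative; Analytica Pro's actual implementation is not public:

```python
# Minimal sketch of the Laplace mechanism for differentially private
# counts. Names and defaults are illustrative assumptions.
import math
import random

def dp_count(true_count: int, epsilon: float = 1.0,
             sensitivity: float = 1.0) -> float:
    """Return the count plus Laplace noise with scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse CDF of a uniform draw.
    u = random.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Smaller epsilon values add more noise, trading accuracy for stronger privacy guarantees.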

Which models can I integrate via the Agentic AI platform?

The Agentic AI platform supports a wide range of foundation models through our unified integration layer:

  • OpenAI: GPT-4, GPT-4 Turbo, and GPT-4o for general reasoning and natural language analytics
  • Anthropic: Claude 3 (Opus, Sonnet, Haiku) for long-context document analysis and structured outputs
  • Meta: Llama 3 (8B, 70B, 405B) for on-premise deployments requiring full data sovereignty
  • Mistral: Mistral Large and Mixtral for cost-optimized European-hosted inference

All models are accessed through a single API endpoint with automatic fallback routing. You can set model preferences per workspace, per task type, or let our intelligent router select the optimal model based on query complexity and latency requirements.
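A client-side view of the fallback routing described above might look like the following sketch. The model names and the shape of the model-calling function are assumptions for illustration; consult the Analytica Pro API documentation for the real interface:

```python
# Hedged sketch of preference-ordered fallback routing across models.
# Model identifiers and the call_model signature are assumed, not the
# actual Analytica Pro API.
from typing import Callable

PREFERRED_MODELS = ["claude-3-opus", "gpt-4o", "llama-3-70b"]

def route_query(query: str, call_model: Callable[[str, str], str]) -> str:
    """Try each model in preference order, falling back on any failure."""
    last_error = None
    for model in PREFERRED_MODELS:
        try:
            return call_model(model, query)
        except Exception as err:  # e.g. timeout or rate limit
            last_error = err
    raise RuntimeError("all models failed") from last_error
```

In practice the platform performs this routing server-side, so a single endpoint call is enough; the sketch only shows the preference-then-fallback logic.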

Still need architectural help?

Our solutions engineers can walk you through deployment topology, security hardening, and integration strategy in a 1-on-1 session.

Schedule Deep-Dive