
Documentation

Complete guide to using Arqive's observability platform

  • Getting Started
  • Sign Up
  • Organizations
  • Invite Members
  • API Keys
  • Create API Key
  • Integrations
  • Logs Integration
  • Metrics Integration
  • Traces Integration
  • Protocols & Schemas
  • Using the Platform

Getting Started

Welcome to Arqive! This documentation will guide you through setting up and using our observability platform. Arqive is built on OpenTelemetry and ClickHouse, providing you with fast, scalable observability for your applications.

Follow the steps below to get started:

  1. Sign up for an account
  2. Create or join an organization
  3. Create an API key for your applications
  4. Integrate your apps to send logs, metrics, and traces
  5. Start exploring your observability data

Sign Up

To get started with Arqive, you need to create an account. We use Auth0 for secure authentication.

Note: If you're accessing Arqive for the first time, you'll be prompted to sign up during the login process.

Steps to Sign Up:

  1. Click the "Sign in" or "Start Free Trial" button on the landing page
  2. You'll be redirected to Auth0 for authentication
  3. If you don't have an account, click "Sign up" on the Auth0 page
  4. Enter your email address and create a password
  5. Complete the email verification if required
  6. Once authenticated, you'll be redirected back to Arqive

After signing up, you'll be prompted to create or select an organization. Organizations allow you to group your team members and manage access to your observability data.

Organizations

Organizations are the primary way to organize your team and data in Arqive. Each organization has its own API keys, members, and observability data.

Creating an Organization

When you first sign in, you'll be prompted to create an organization. You can also create additional organizations later from the organization dropdown in the sidebar.

To create an organization:

  1. Click on the organization dropdown in the top-left of the sidebar
  2. Click "Create organization"
  3. Enter a name for your organization
  4. Click "Create"

Once created, you'll automatically be set as a member of the organization and can start creating API keys and inviting team members.

Switching Organizations

If you're a member of multiple organizations, you can switch between them using the organization dropdown in the sidebar. All your data, API keys, and settings are scoped to the currently selected organization.

Inviting Members to Your Organization

You can invite team members to join your organization, allowing them to access the organization's data, create API keys, and manage settings.

How to Invite Members:

  1. Click on the organization dropdown in the sidebar
  2. Click "Invite user"
  3. Enter the email address of the person you want to invite
  4. Click "Send Invitation"

Important: The invited user must have an Arqive account. If they don't have one, they should sign up first before accepting the invitation.

Accepting Invitations

When you receive an invitation, you'll see a notification in the organization dropdown. Click on the invitation to accept it and join the organization.

API Keys

API keys are used to authenticate your applications when sending logs, metrics, and traces to Arqive. Each API key is associated with an organization and can be used to send data to all ingestion endpoints.

Security Warning: API keys provide full access to send data to your organization. Keep them secure and never commit them to version control. If a key is compromised, revoke it immediately.

Creating an API Key

To create an API key for your applications:

  1. Navigate to Settings in the sidebar
  2. Go to the "API Keys" section
  3. Click "Create API Key"
  4. Enter a descriptive name for the key (e.g., "Production App", "Development Environment")
  5. Optionally set an expiration date (leave blank for keys that never expire)
  6. Click "Create"

Important: Copy the API key immediately after creation. It will not be shown again for security reasons. If you lose it, you'll need to create a new key.

Using Your API Key

When you create an API key, you'll receive the following information:

  • API Key: The secret key to use for authentication
  • Logs Endpoint: /v1/logs
  • Metrics Endpoint: /v1/metrics
  • Traces Endpoint: /v1/traces

Include the API key in the X-API-Key header when making requests to these endpoints.
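Under the hood this is just an authenticated HTTP POST. As a minimal sketch in plain Python (standard library only; the base URL and key are placeholders, and the `build_request` helper is illustrative, not part of any Arqive SDK):

```python
import json
import urllib.request

# Placeholders: substitute your instance URL and a real API key.
BASE_URL = "https://your-arqive-instance.com"
API_KEY = "your-api-key-here"

def build_request(signal: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for one of the ingestion endpoints."""
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/{signal}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": API_KEY,
        },
        method="POST",
    )

# signal is "logs", "metrics", or "traces"
req = build_request("logs", {"resourceLogs": []})
# Send with urllib.request.urlopen(req) once your instance is reachable.
```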

Integrating Your Applications

Arqive accepts observability data in OpenTelemetry format. You can send logs, metrics, and traces using standard OpenTelemetry protocols (OTLP over HTTP).

All endpoints require authentication using your API key in the X-API-Key header.

Logs Integration

Send logs to Arqive using the OpenTelemetry Logs Protocol (OTLP).

Endpoint

POST /v1/logs

Headers

Content-Type: application/json
X-API-Key: your-api-key-here

Request Body Format

The request body should follow the OpenTelemetry Logs Protocol format:

{
  "resourceLogs": [
    {
      "resource": {
        "attributes": [
          {"key": "service.name", "value": {"stringValue": "my-service"}},
          {"key": "service.version", "value": {"stringValue": "1.0.0"}}
        ]
      },
      "scopeLogs": [
        {
          "scope": {
            "name": "my-logger"
          },
          "logRecords": [
            {
              "timeUnixNano": "1234567890000000000",
              "severityText": "INFO",
              "body": {
                "stringValue": "Log message here"
              },
              "attributes": [
                {"key": "log.level", "value": {"stringValue": "info"}}
              ]
            }
          ]
        }
      ]
    }
  ]
}
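If you assemble payloads by hand, a small helper keeps the nesting straight. This sketch mirrors the envelope above; the `make_log_payload` name and its defaults are illustrative, not part of the platform:

```python
import time

def make_log_payload(service_name: str, message: str, severity: str = "INFO") -> dict:
    """Assemble a minimal OTLP/JSON logs envelope carrying one log record."""
    return {
        "resourceLogs": [{
            "resource": {
                "attributes": [
                    {"key": "service.name", "value": {"stringValue": service_name}},
                ],
            },
            "scopeLogs": [{
                "scope": {"name": "my-logger"},
                "logRecords": [{
                    # OTLP/JSON encodes 64-bit nanosecond timestamps as strings
                    "timeUnixNano": str(time.time_ns()),
                    "severityText": severity,
                    "body": {"stringValue": message},
                }],
            }],
        }],
    }

payload = make_log_payload("my-service", "Log message here")
```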

Example: Using OpenTelemetry SDK

For Python applications, you can use the OpenTelemetry Python SDK:

import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

# Configure the exporter
exporter = OTLPLogExporter(
    endpoint="https://your-arqive-instance.com/v1/logs",
    headers={"X-API-Key": "your-api-key"}
)

# Set up the logger provider
logger_provider = LoggerProvider()
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
set_logger_provider(logger_provider)

# Bridge Python's standard logging module to OpenTelemetry
logging.getLogger().addHandler(LoggingHandler(logger_provider=logger_provider))
logging.getLogger().setLevel(logging.INFO)

# Log as usual; records are batched and exported to Arqive
logging.getLogger(__name__).info("This is a log message")

Metrics Integration

Send metrics to Arqive using the OpenTelemetry Metrics Protocol (OTLP).

Endpoint

POST /v1/metrics

Headers

Content-Type: application/json
X-API-Key: your-api-key-here

Request Body Format

The request body should follow the OpenTelemetry Metrics Protocol format:

{
  "resourceMetrics": [
    {
      "resource": {
        "attributes": [
          {"key": "service.name", "value": {"stringValue": "my-service"}}
        ]
      },
      "scopeMetrics": [
        {
          "scope": {
            "name": "my-metrics"
          },
          "metrics": [
            {
              "name": "request_count",
              "description": "Number of requests",
              "unit": "1",
              "sum": {
                "dataPoints": [
                  {
                    "asInt": "100",
                    "timeUnixNano": "1234567890000000000",
                    "attributes": [
                      {"key": "method", "value": {"stringValue": "GET"}}
                    ]
                  }
                ],
                "aggregationTemporality": 2,
                "isMonotonic": true
              }
            }
          ]
        }
      ]
    }
  ]
}
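The same envelope can be generated programmatically. Here is a sketch of a hypothetical `counter_metric` helper that mirrors the payload above for a cumulative, monotonic counter:

```python
import time

def counter_metric(name: str, value: int, labels: dict) -> dict:
    """Build one cumulative, monotonic sum metric in OTLP/JSON form."""
    return {
        "name": name,
        "description": "",
        "unit": "1",
        "sum": {
            "dataPoints": [{
                # 64-bit integers are serialized as strings in OTLP/JSON
                "asInt": str(value),
                "timeUnixNano": str(time.time_ns()),
                "attributes": [
                    {"key": k, "value": {"stringValue": v}}
                    for k, v in labels.items()
                ],
            }],
            "aggregationTemporality": 2,  # 2 = CUMULATIVE
            "isMonotonic": True,
        },
    }

metric = counter_metric("request_count", 100, {"method": "GET"})
```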

Example: Using Prometheus Remote Write

You can also use Prometheus Remote Write format. Configure your Prometheus instance:

# prometheus.yml
remote_write:
  - url: https://your-arqive-instance.com/v1/metrics
    headers:
      X-API-Key: your-api-key-here

Traces Integration

Send traces to Arqive using the OpenTelemetry Traces Protocol (OTLP).

Endpoint

POST /v1/traces

Headers

Content-Type: application/json
X-API-Key: your-api-key-here

Request Body Format

The request body should follow the OpenTelemetry Traces Protocol format:

{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          {"key": "service.name", "value": {"stringValue": "my-service"}}
        ]
      },
      "scopeSpans": [
        {
          "scope": {
            "name": "my-tracer"
          },
          "spans": [
            {
              "traceId": "0123456789abcdef0123456789abcdef",
              "spanId": "0123456789abcdef",
              "name": "operation-name",
              "kind": 1,
              "startTimeUnixNano": "1234567890000000000",
              "endTimeUnixNano": "1234567891000000000",
              "attributes": [
                {"key": "http.method", "value": {"stringValue": "GET"}}
              ],
              "status": {
                "code": 1
              }
            }
          ]
        }
      ]
    }
  ]
}
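When constructing spans by hand, `traceId` and `spanId` must be 16 and 8 random bytes respectively, hex-encoded to lowercase as in the payload above. A minimal way to generate them in Python:

```python
import os

def new_trace_id() -> str:
    # 16 random bytes, hex-encoded to 32 lowercase characters
    return os.urandom(16).hex()

def new_span_id() -> str:
    # 8 random bytes, hex-encoded to 16 lowercase characters
    return os.urandom(8).hex()

trace_id = new_trace_id()
span_id = new_span_id()
```

In practice the SDK example below generates these IDs for you; manual generation only matters if you post raw OTLP/JSON.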

Example: Using OpenTelemetry SDK

For Python applications:

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Configure exporter
exporter = OTLPSpanExporter(
    endpoint="https://your-arqive-instance.com/v1/traces",
    headers={"X-API-Key": "your-api-key"}
)

# Setup tracer provider
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(tracer_provider)

# Use the tracer
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("operation-name") as span:
    span.set_attribute("key", "value")
    # Your code here

Protocols & Schemas

Arqive uses standard OpenTelemetry protocols and schemas. This ensures compatibility with a wide range of observability tools and libraries.

Supported Protocols

  • OTLP (OpenTelemetry Protocol): Native protocol for logs, metrics, and traces
  • Prometheus Remote Write: For metrics ingestion from Prometheus
  • HTTP/JSON: All endpoints accept JSON payloads over HTTP

Expected Schemas

Arqive expects data in OpenTelemetry format. Key attributes include:

Resource Attributes (Recommended):

  • service.name - Name of your service
  • service.version - Version of your service
  • service.namespace - Namespace/environment (e.g., "production", "staging")
  • deployment.environment - Deployment environment
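If you build payloads manually, a tiny helper (hypothetical, not part of any SDK) can turn these dotted names into OTLP attribute pairs; here underscores in keyword arguments stand in for the dots:

```python
def resource_attributes(**attrs: str) -> list:
    """Convert keyword arguments into OTLP key/value attribute pairs."""
    return [
        {"key": key.replace("_", "."), "value": {"stringValue": value}}
        for key, value in attrs.items()
    ]

attrs = resource_attributes(service_name="my-service", service_version="1.0.0")
```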

Log Attributes:

  • log.level - Log level (DEBUG, INFO, WARN, ERROR)
  • severityText - Severity as text
  • severityNumber - Severity as number (1-24)
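The 1-24 range comes from the OpenTelemetry log data model, where each text level owns a band of four numbers (TRACE 1-4, DEBUG 5-8, INFO 9-12, WARN 13-16, ERROR 17-20, FATAL 21-24). A minimal mapping to the base of each band:

```python
# Lowest severityNumber in each band of the OpenTelemetry log data model
SEVERITY_NUMBER = {
    "TRACE": 1,
    "DEBUG": 5,
    "INFO": 9,
    "WARN": 13,
    "ERROR": 17,
    "FATAL": 21,
}

def severity_number(severity_text: str) -> int:
    """Map severityText to its base severityNumber; 0 means UNSPECIFIED."""
    return SEVERITY_NUMBER.get(severity_text.upper(), 0)
```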

Trace Attributes:

  • http.method - HTTP method
  • http.status_code - HTTP status code
  • http.url - Request URL
  • db.system - Database system name
  • db.operation - Database operation

For complete schema documentation, refer to the OpenTelemetry Specification.

Using the Observability Platform

Once you've integrated your applications, you can use Arqive's platform to explore and analyze your observability data.

Logs

The Logs view allows you to search, filter, and analyze your log data. You can:

  • Search logs by text, service, or severity
  • Filter by time range
  • View detailed log entries with all attributes
  • Export log data for analysis

Metrics

The Metrics view provides tools to explore your metrics data:

  • Query metrics by name
  • Visualize metrics over time
  • Group metrics by labels/attributes
  • Export metric data

Traces

The Traces view helps you understand request flows and performance:

  • View trace timelines and spans
  • Filter traces by status, latency, or service
  • Analyze trace duration and errors
  • Drill down into individual spans

Dashboards

Create custom dashboards to visualize your observability data:

  • Build custom visualizations
  • Combine logs, metrics, and traces
  • Share dashboards with your team
  • Set up alerts based on dashboard metrics

Home Dashboard

The Home view provides an overview of your observability data with:

  • Log count and severity distribution
  • System metrics (CPU, memory, request rate)
  • Trace statistics and error rates
  • Recent log entries


