Documentation
Complete guide to using Arqive's observability platform
Getting Started
Welcome to Arqive! This documentation will guide you through setting up and using our observability platform. Arqive is built on OpenTelemetry and ClickHouse, providing you with fast, scalable observability for your applications.
Follow the steps below to get started:
- Sign up for an account
- Create or join an organization
- Create an API key for your applications
- Integrate your apps to send logs, metrics, and traces
- Start exploring your observability data
Sign Up
To get started with Arqive, you need to create an account. We use Auth0 for secure authentication.
Note: If you're accessing Arqive for the first time, you'll be prompted to sign up during the login process.
Steps to Sign Up:
- Click the "Sign in" or "Start Free Trial" button on the landing page
- You'll be redirected to Auth0 for authentication
- If you don't have an account, click "Sign up" on the Auth0 page
- Enter your email address and create a password
- Complete the email verification if required
- Once authenticated, you'll be redirected back to Arqive
After signing up, you'll be prompted to create or select an organization. Organizations allow you to group your team members and manage access to your observability data.
Organizations
Organizations are the primary way to organize your team and data in Arqive. Each organization has its own API keys, members, and observability data.
Creating an Organization
When you first sign in, you'll be prompted to create an organization. You can also create additional organizations later from the organization dropdown in the sidebar.
To create an organization:
- Click on the organization dropdown in the top-left of the sidebar
- Click "Create organization"
- Enter a name for your organization
- Click "Create"
Once created, you'll automatically be set as a member of the organization and can start creating API keys and inviting team members.
Switching Organizations
If you're a member of multiple organizations, you can switch between them using the organization dropdown in the sidebar. All your data, API keys, and settings are scoped to the currently selected organization.
Inviting Members to Your Organization
You can invite team members to join your organization, allowing them to access the organization's data, create API keys, and manage settings.
How to Invite Members:
- Click on the organization dropdown in the sidebar
- Click "Invite user"
- Enter the email address of the person you want to invite
- Click "Send Invitation"
Important: The invited user must have an Arqive account. If they don't have one, they should sign up first before accepting the invitation.
Accepting Invitations
When you receive an invitation, you'll see a notification in the organization dropdown. Click on the invitation to accept it and join the organization.
API Keys
API keys are used to authenticate your applications when sending logs, metrics, and traces to Arqive. Each API key is associated with an organization and can be used to send data to all ingestion endpoints.
Security Warning: API keys provide full access to send data to your organization. Keep them secure and never commit them to version control. If a key is compromised, revoke it immediately.
Creating an API Key
To create an API key for your applications:
- Navigate to Settings in the sidebar
- Go to the "API Keys" section
- Click "Create API Key"
- Enter a descriptive name for the key (e.g., "Production App", "Development Environment")
- Optionally set an expiration date (leave blank for keys that never expire)
- Click "Create"
Important: Copy the API key immediately after creation. It will not be shown again for security reasons. If you lose it, you'll need to create a new key.
Using Your API Key
When you create an API key, you'll receive the following information:
- API Key: The secret key to use for authentication
- Logs Endpoint: /v1/logs
- Metrics Endpoint: /v1/metrics
- Traces Endpoint: /v1/traces
Include the API key in the X-API-Key header when making requests to these endpoints.
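Putting that together, an authenticated ingestion request can be sketched with the standard library. The base URL and key below are placeholders, not real values:

```python
import json
import urllib.request

# Placeholder values -- substitute your actual instance URL and API key.
ARQIVE_BASE_URL = "https://your-arqive-instance.com"
API_KEY = "your-api-key-here"

def build_request(signal: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated ingestion request for "logs", "metrics", or "traces"."""
    return urllib.request.Request(
        url=f"{ARQIVE_BASE_URL}/v1/{signal}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": API_KEY,
        },
        method="POST",
    )

req = build_request("logs", {"resourceLogs": []})
# urllib.request.urlopen(req) would then send the data to Arqive.
```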
Integrating Your Applications
Arqive accepts observability data in OpenTelemetry format. You can send logs, metrics, and traces using standard OpenTelemetry protocols (OTLP over HTTP).
All endpoints require authentication using your API key in the X-API-Key header.
Logs Integration
Send logs to Arqive using the OpenTelemetry Logs Protocol (OTLP).
Endpoint
POST /v1/logs
Headers
Content-Type: application/json
X-API-Key: your-api-key-here
Request Body Format
The request body should follow the OpenTelemetry Logs Protocol format:
{
"resourceLogs": [
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "my-service"}},
{"key": "service.version", "value": {"stringValue": "1.0.0"}}
]
},
"scopeLogs": [
{
"scope": {
"name": "my-logger"
},
"logRecords": [
{
"timeUnixNano": "1234567890000000000",
"severityText": "INFO",
"body": {
"stringValue": "Log message here"
},
"attributes": [
{"key": "log.level", "value": {"stringValue": "info"}}
]
}
]
}
]
}
]
}
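A payload like the one above can be assembled programmatically. This helper is a sketch, not part of any Arqive SDK; the logger name is a placeholder:

```python
import time

def otlp_log_payload(service: str, message: str, severity: str = "INFO") -> dict:
    """Build a minimal OTLP/JSON logs payload containing a single log record."""
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}},
            ]},
            "scopeLogs": [{
                "scope": {"name": "my-logger"},
                "logRecords": [{
                    # OTLP timestamps are nanoseconds since the Unix epoch, as strings.
                    "timeUnixNano": str(time.time_ns()),
                    "severityText": severity,
                    "body": {"stringValue": message},
                }],
            }],
        }],
    }
```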
Example: Using OpenTelemetry SDK
For Python applications, you can use the OpenTelemetry Python SDK:
import logging
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
# Configure exporter
exporter = OTLPLogExporter(
    endpoint="https://your-arqive-instance.com/v1/logs",
    headers={"X-API-Key": "your-api-key"}
)
# Set up the logger provider
logger_provider = LoggerProvider()
logger_provider.add_log_record_processor(
    BatchLogRecordProcessor(exporter)
)
set_logger_provider(logger_provider)
# Route standard library logging through OpenTelemetry
handler = LoggingHandler(logger_provider=logger_provider)
logging.getLogger().addHandler(handler)
logging.getLogger(__name__).info("This is a log message")
Metrics Integration
Send metrics to Arqive using the OpenTelemetry Metrics Protocol (OTLP).
Endpoint
POST /v1/metrics
Headers
Content-Type: application/json
X-API-Key: your-api-key-here
Request Body Format
The request body should follow the OpenTelemetry Metrics Protocol format:
{
"resourceMetrics": [
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "my-service"}}
]
},
"scopeMetrics": [
{
"scope": {
"name": "my-metrics"
},
"metrics": [
{
"name": "request_count",
"description": "Number of requests",
"unit": "1",
"sum": {
"dataPoints": [
{
"asInt": "100",
"timeUnixNano": "1234567890000000000",
"attributes": [
{"key": "method", "value": {"stringValue": "GET"}}
]
}
],
"aggregationTemporality": 2,
"isMonotonic": true
}
}
]
}
]
}
]
}
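In the payload above, aggregationTemporality of 2 means cumulative (the value is a running total since start) and isMonotonic marks the sum as a counter that never decreases. A sketch of a helper that builds such a payload; the scope name is a placeholder:

```python
import time

def otlp_counter_payload(service: str, name: str, value: int) -> dict:
    """Build a minimal OTLP/JSON payload for one cumulative, monotonic counter."""
    return {
        "resourceMetrics": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}},
            ]},
            "scopeMetrics": [{
                "scope": {"name": "my-metrics"},
                "metrics": [{
                    "name": name,
                    "unit": "1",
                    "sum": {
                        "dataPoints": [{
                            # Integer values are encoded as strings in OTLP/JSON.
                            "asInt": str(value),
                            "timeUnixNano": str(time.time_ns()),
                        }],
                        # 2 = AGGREGATION_TEMPORALITY_CUMULATIVE
                        "aggregationTemporality": 2,
                        "isMonotonic": True,
                    },
                }],
            }],
        }],
    }
```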
Example: Using Prometheus Remote Write
You can also use Prometheus Remote Write format. Configure your Prometheus instance:
# prometheus.yml
remote_write:
- url: https://your-arqive-instance.com/v1/metrics
headers:
X-API-Key: your-api-key-here
Traces Integration
Send traces to Arqive using the OpenTelemetry Traces Protocol (OTLP).
Endpoint
POST /v1/traces
Headers
Content-Type: application/json
X-API-Key: your-api-key-here
Request Body Format
The request body should follow the OpenTelemetry Traces Protocol format:
{
"resourceSpans": [
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "my-service"}}
]
},
"scopeSpans": [
{
"scope": {
"name": "my-tracer"
},
"spans": [
{
"traceId": "0123456789abcdef0123456789abcdef",
"spanId": "0123456789abcdef",
"name": "operation-name",
"kind": 1,
"startTimeUnixNano": "1234567890000000000",
"endTimeUnixNano": "1234567891000000000",
"attributes": [
{"key": "http.method", "value": {"stringValue": "GET"}}
],
"status": {
"code": 1
}
}
]
}
]
}
]
}
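Note the ID formats in the payload above: a traceId is 16 bytes and a spanId is 8 bytes, both hex-encoded. If you build spans by hand rather than through an SDK, fresh random IDs can be generated with the standard library:

```python
import os

def new_trace_id() -> str:
    """Trace IDs are 16 random bytes, hex-encoded (32 hex characters)."""
    return os.urandom(16).hex()

def new_span_id() -> str:
    """Span IDs are 8 random bytes, hex-encoded (16 hex characters)."""
    return os.urandom(8).hex()
```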
Example: Using OpenTelemetry SDK
For Python applications:
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
# Configure exporter
exporter = OTLPSpanExporter(
endpoint="https://your-arqive-instance.com/v1/traces",
headers={"X-API-Key": "your-api-key"}
)
# Setup tracer provider
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(tracer_provider)
# Use the tracer
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("operation-name") as span:
span.set_attribute("key", "value")
# Your code here
Protocols & Schemas
Arqive uses standard OpenTelemetry protocols and schemas. This ensures compatibility with a wide range of observability tools and libraries.
Supported Protocols
- OTLP (OpenTelemetry Protocol): Native protocol for logs, metrics, and traces
- Prometheus Remote Write: For metrics ingestion from Prometheus
- HTTP/JSON: All endpoints accept JSON payloads over HTTP
Expected Schemas
Arqive expects data in OpenTelemetry format. Key attributes include:
Resource Attributes (Recommended):
- service.name - Name of your service
- service.version - Version of your service
- service.namespace - Namespace/environment (e.g., "production", "staging")
- deployment.environment - Deployment environment
Log Attributes:
- log.level - Log level (DEBUG, INFO, WARN, ERROR)
- severityText - Severity as text
- severityNumber - Severity as number (1-24)
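The severityNumber ranges come from the OpenTelemetry log data model, where each text level maps to a block of four numbers. A small lookup table for the first number of each range:

```python
# First severityNumber of each range in the OpenTelemetry log data model:
# TRACE=1-4, DEBUG=5-8, INFO=9-12, WARN=13-16, ERROR=17-20, FATAL=21-24.
SEVERITY_NUMBER = {
    "TRACE": 1,
    "DEBUG": 5,
    "INFO": 9,
    "WARN": 13,
    "ERROR": 17,
    "FATAL": 21,
}
```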
Trace Attributes:
- http.method - HTTP method
- http.status_code - HTTP status code
- http.url - Request URL
- db.system - Database system name
- db.operation - Database operation
For complete schema documentation, refer to the OpenTelemetry Specification.
Using the Observability Platform
Once you've integrated your applications, you can use Arqive's platform to explore and analyze your observability data.
Logs
The Logs view allows you to search, filter, and analyze your log data. You can:
- Search logs by text, service, or severity
- Filter by time range
- View detailed log entries with all attributes
- Export log data for analysis
Metrics
The Metrics view provides tools to explore your metrics data:
- Query metrics by name
- Visualize metrics over time
- Group metrics by labels/attributes
- Export metric data
Traces
The Traces view helps you understand request flows and performance:
- View trace timelines and spans
- Filter traces by status, latency, or service
- Analyze trace duration and errors
- Drill down into individual spans
Dashboards
Create custom dashboards to visualize your observability data:
- Build custom visualizations
- Combine logs, metrics, and traces
- Share dashboards with your team
- Set up alerts based on dashboard metrics
Home Dashboard
The Home view provides an overview of your observability data with:
- Log count and severity distribution
- System metrics (CPU, memory, request rate)
- Trace statistics and error rates
- Recent log entries