Understand attribution labels
Effective cost attribution depends on selecting appropriate labels from your telemetry data. Understanding label characteristics, cardinality limits, and common labeling patterns helps you optimize your cost attribution strategy.
Label selection criteria
When choosing labels for cost attribution, consider the following factors:
Coverage
Select labels that are applied to the majority of your telemetry data. High coverage ensures that most of your usage can be properly attributed rather than classified as unattributed.
Examples of high-coverage labels:
- `service_name` or `service` - Applied to most application metrics, logs, and traces
- `environment` - Used across development, staging, and production telemetry
- `team` or `squad` - Consistently applied by different development teams
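For example, a single well-labeled metric sample might carry all three of these labels at once (the metric name and values below are hypothetical):

```python
# A hypothetical metric sample carrying all three high-coverage labels.
# Any of these labels could serve as a cost attribution label.
sample = {
    "metric": "http_requests_total",
    "labels": {
        "service_name": "user-api",
        "environment": "production",
        "team": "backend",
    },
    "value": 1024,
}
```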
Granularity
Choose labels that provide the right level of detail for your cost allocation needs without exceeding cardinality limits.
Appropriate granularity examples:
- Team-level attribution: `team=frontend`, `team=backend`, `team=platform`
- Service-level attribution: `service=user-api`, `service=payment-service`, `service=notification-service`
- Environment-level attribution: `environment=production`, `environment=staging`, `environment=development`
Stability
Select labels with stable values that don’t change frequently. Unstable labels can create inconsistent attribution over time.
Stable label examples:
- Service names that remain consistent across deployments
- Team names that don’t change frequently
- Environment classifications
Avoid unstable labels:
- Pod names or container IDs that change with each deployment
- Version numbers that change with each release
- Temporary identifiers
Cardinality limits
Cost attribution has specific cardinality limits to ensure performance and manageability:
- Maximum labels: You can configure up to 2 labels for cost attribution
- Combined cardinality: Maximum of 1,000 unique combinations across both labels
- Enforcement: These limits are enforced at configuration time
Calculate cardinality
To estimate cardinality for your labels:
Single label cardinality: Count the unique values for each label.

- Example: a `team` label with values `frontend`, `backend`, `platform` = 3 unique values

Combined cardinality: Multiply the unique values of both labels.

- Example: `team` (3 values) × `environment` (3 values) = 9 total combinations
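The same arithmetic is easy to verify in code. The following is a minimal sketch, assuming you have already collected the unique values for each label (the values are the examples from above):

```python
from itertools import product

# Unique values observed for each attribution label (example values from above).
team_values = {"frontend", "backend", "platform"}
environment_values = {"production", "staging", "development"}

# Single label cardinality: the number of unique values per label.
print(len(team_values))         # 3
print(len(environment_values))  # 3

# Combined cardinality: in the worst case every pair of values occurs,
# so the upper bound is the product of the single-label counts.
combinations = set(product(team_values, environment_values))
print(len(combinations))        # 9

# Compare against the documented limit of 1,000 unique combinations.
assert len(combinations) <= 1_000, "labels exceed the cardinality limit"
```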
Optimize cardinality
If your labels exceed the cardinality limit, consider these strategies:
Reduce label values:
- Consolidate similar teams or services
- Use broader categories instead of specific identifiers
- Group low-usage services together
Choose different labels:
- Select labels with fewer unique values
- Use hierarchical labels (for example, `business_unit` instead of `team`)
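As a sketch of these strategies, the mapping below (all team and unit names are hypothetical) folds fine-grained teams into broader business units and groups everything unmapped into a catch-all value:

```python
# Hypothetical mapping from fine-grained teams to broader business units.
TEAM_TO_BUSINESS_UNIT = {
    "web-frontend": "product",
    "mobile-frontend": "product",
    "payments": "commerce",
    "billing": "commerce",
    "observability": "platform",
}

def business_unit(team: str) -> str:
    """Map a team to its business unit; group low-usage teams as 'other'."""
    return TEAM_TO_BUSINESS_UNIT.get(team, "other")

print(business_unit("payments"))  # commerce
print(business_unit("hack-day"))  # other
```

This reduces a `team` label with arbitrarily many values to a `business_unit` label with only four.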
Common attribution labels
Team-based attribution
Allocate costs based on team ownership:
Primary label options:
- `team`, `squad`, `group`
- `business_unit`, `department`
- `cost_center`

Secondary label options:

- `environment` (production, staging, development)
- `service_type` (frontend, backend, database)
Service-based attribution
Allocate costs based on service ownership:
Primary label options:
- `service_name`, `service`, `application`
- `service_tier` (critical, standard, experimental)

Secondary label options:

- `team` (owning team)
- `environment`
Environment-based attribution
Allocate costs based on environment usage:
Primary label options:
- `environment`, `env`, `stage`
- `cluster`, `region`

Secondary label options:

- `service_name`
- `team`
Unattributed telemetry
Telemetry data that doesn’t include your configured attribution labels is classified as unattributed. Understanding and minimizing unattributed data improves the accuracy of your cost allocation.
Causes of unattributed data
Missing labels:
- Telemetry sent without the required attribution labels
- Services that haven’t implemented consistent labeling
- Legacy systems with different labeling conventions
Incorrect label names:
- Slight variations in label names (for example, `team` vs `teams`)
- Case sensitivity differences (for example, `Team` vs `team`)
- Typos in label names
Infrastructure telemetry:
- System-level metrics that may not include application labels
- Infrastructure monitoring that uses different labeling schemes
Monitor unattributed data
Track unattributed data to identify labeling gaps:
- Review the attribution overview for unattributed percentages
- Identify services or teams with high unattributed costs
- Investigate labeling inconsistencies in your telemetry pipeline
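If you export attributed costs for review, the unattributed share is straightforward to compute. A minimal sketch, using hypothetical cost rows where an empty label value marks unattributed usage:

```python
# Hypothetical cost rows; an empty 'team' value means unattributed usage.
costs = [
    {"team": "frontend", "cost": 120.0},
    {"team": "backend", "cost": 340.0},
    {"team": "", "cost": 95.0},  # unattributed
]

total = sum(row["cost"] for row in costs)
unattributed = sum(row["cost"] for row in costs if not row["team"])
print(f"unattributed share: {unattributed / total:.1%}")  # 17.1%
```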
Reduce unattributed data
Implement consistent labeling:
- Establish labeling standards across your organization
- Use configuration management to ensure consistent label application
- Implement validation in your telemetry pipeline
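One way to implement that validation is a check that runs before telemetry leaves your pipeline. A minimal sketch, assuming `team` and `environment` are your configured attribution labels:

```python
REQUIRED_LABELS = {"team", "environment"}  # your configured attribution labels

def label_problems(labels: dict[str, str]) -> list[str]:
    """Return the problems that would make this telemetry unattributed."""
    problems = []
    for name in sorted(REQUIRED_LABELS):
        if name not in labels:
            problems.append(f"missing label: {name}")
        elif not labels[name]:
            problems.append(f"empty value for label: {name}")
    return problems

print(label_problems({"team": "frontend"}))
# ['missing label: environment']
```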
Update telemetry collection:
- Configure instrumentation libraries to include attribution labels
- Update manual instrumentation to apply required labels
- Review and update legacy systems
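For example, if you instrument services with the OpenTelemetry SDK for Python, resource attributes are one place to attach attribution labels so that all emitted telemetry carries them (the service and label values here are hypothetical):

```python
from opentelemetry.sdk.resources import Resource

# Attach attribution labels as resource attributes so that metrics, logs,
# and traces emitted by this process all carry them.
resource = Resource.create({
    "service.name": "user-api",
    "team": "backend",
    "environment": "production",
})
# Pass 'resource' to your TracerProvider, MeterProvider, or LoggerProvider.
```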
Use telemetry transformation:
- Apply labels during telemetry processing
- Use Grafana Alloy or OpenTelemetry Collector processors to add missing labels
- Implement label mapping for different naming conventions
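The mapping itself can be prototyped before you encode it as an Alloy or OpenTelemetry Collector processor rule. A minimal sketch with a hypothetical alias table:

```python
# Hypothetical aliases: variant label names mapped to the canonical names.
LABEL_ALIASES = {
    "teams": "team",  # pluralized variant
    "Team": "team",   # case difference
    "squad": "team",  # alternate naming convention
    "env": "environment",
}

def normalize(labels: dict[str, str]) -> dict[str, str]:
    """Rename known variants to the canonical attribution label names."""
    return {LABEL_ALIASES.get(name, name): value for name, value in labels.items()}

print(normalize({"teams": "backend", "env": "production"}))
# {'team': 'backend', 'environment': 'production'}
```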
Best practices
Label naming conventions
- Use consistent naming patterns across your organization
- Choose descriptive but concise label names
- Avoid special characters or spaces in label values
- Use lowercase for consistency
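These conventions are easy to enforce mechanically. A minimal sketch of such a check, with hypothetical patterns that encode the rules above:

```python
import re

# Hypothetical patterns: lowercase label names, and label values without
# spaces or special characters.
LABEL_NAME_RE = re.compile(r"[a-z][a-z0-9_]*")
LABEL_VALUE_RE = re.compile(r"[a-z0-9][a-z0-9._-]*")

def follows_conventions(name: str, value: str) -> bool:
    return bool(LABEL_NAME_RE.fullmatch(name) and LABEL_VALUE_RE.fullmatch(value))

print(follows_conventions("team", "frontend"))   # True
print(follows_conventions("Team", "Front End"))  # False (case and space)
```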
Attribution strategy
- Start with broader categories and refine over time
- Document your attribution strategy for team reference
- Review attribution effectiveness regularly
- Adjust labels based on organizational changes
Maintenance and monitoring
- Set up alerts for high unattributed percentages
- Review attribution accuracy monthly
- Update labeling as services and teams evolve
- Coordinate with teams to maintain labeling standards