Track recent changes and their effects
Use the entity catalog and RCA workbench to track deployments, configuration changes, and scale events. Correlate these changes with performance issues to quickly identify whether a recent change triggered a problem.
When to use this workflow
Use this workflow when:
- A service starts showing errors or latency issues
- You suspect a recent deployment caused problems
- Performance degraded but you’re not sure why
- You need to audit recent changes across services
- You want to correlate configuration changes with incidents
This workflow helps answer “what changed?” during troubleshooting.
Before you begin
Ensure the knowledge graph is capturing change events:
- Kubernetes deployments and rollouts
- ConfigMap and Secret updates
- HPA (Horizontal Pod Autoscaler) scale events
- Service version changes
- Infrastructure configuration changes
These appear in the knowledge graph as Amend insights. The sketch below shows one way to spot-check the underlying Kubernetes events directly.
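If you want to verify these raw change signals independently of the product, a minimal sketch using the official `kubernetes` Python client might look like the following. It assumes a valid kubeconfig and a hypothetical `production` namespace; note that ConfigMap and Secret edits do not emit cluster events by default, so those signals typically come from audit logs or watch streams instead.

```python
# Minimal sketch: list recent change-related Kubernetes events.
# Assumes the official `kubernetes` Python client and a valid kubeconfig;
# the namespace name is hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Reasons emitted by the deployment controller and the HPA controller.
CHANGE_REASONS = {"ScalingReplicaSet", "SuccessfulRescale"}

for e in core.list_namespaced_event("production").items:
    if e.reason in CHANGE_REASONS:
        print(e.last_timestamp, e.involved_object.kind,
              e.involved_object.name, e.reason, "-", e.message)
```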
Find recent changes in the entity catalog
Use Amend insights to identify all recent deployments and configuration changes.
Filter to Amend insights
- Navigate to Observability > Entity catalog.
- Under Insight Rings, select Amend.
- Deselect other insight categories.
This shows only entities with recent changes.
Review change types
Amend insights include:
- Deployment - New service version deployed
- ConfigMap update - Configuration changed
- Secret update - Secrets rotated or modified
- HPA scale - Pods scaled up or down automatically
- Manual scale - Replica count changed manually
- Version change - Service or infrastructure version updated
Check timing
For each entity with an Amend insight:
- Click the entity to open details.
- Note the time when the change occurred.
- Compare with when performance issues started.
If the change happened just before the issues started, it's likely the cause. The sketch below shows one way to check this timing programmatically.
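As a concrete illustration of this timing check, here is a minimal sketch that compares a Deployment's last rollout condition with when errors began. It assumes the `kubernetes` Python client; the service name, namespace, and error start time are hypothetical placeholders.

```python
# Minimal sketch: did errors start shortly after the last rollout?
# Names and the error start time are hypothetical placeholders.
from datetime import datetime, timedelta, timezone
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment("payment-service", "production")
rollout_time = max(
    c.last_update_time for c in dep.status.conditions if c.last_update_time
)

# In practice this timestamp would come from your alerting or APM tool.
error_start = datetime(2024, 1, 15, 10, 16, tzinfo=timezone.utc)

if timedelta(0) < error_start - rollout_time < timedelta(minutes=15):
    print("Errors began shortly after the rollout -- likely related.")
```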
Correlate changes with errors
Identify whether a recent change triggered performance issues or failures.
Timeline correlation in the entity catalog
- Filter to services with both Amend and Error insights.
- Click a service to view details.
- In the service overview, check whether:
- The Amend insight (blue) appears first
- Error insights (red/yellow) appear shortly after
This sequence indicates that the change triggered the errors.
Use RCA workbench for multi-service analysis
When changes might affect multiple services:
- Navigate to Observability > RCA workbench.
- Add services that have both changes and errors.
- View the Timeline.
- Look for Amend insights (blue) followed by Error insights (red).
Example pattern:
- 10:15 AM - Amend: Deployment on payment-service
- 10:16 AM - Error: Request error rate breach on payment-service
- 10:17 AM - Error: Timeout errors on checkout-service (calls payment-service)
This shows a deployment causing errors that propagated upstream.
Investigate specific change types
Drill into different kinds of changes to understand their specific impact.
Deployment changes
When a deployment Amend appears:
- Click the service to view details.
- Check the Properties tab for:
- New version number
- Deployment time
- Image tag or commit hash (the sketch below shows how to read these directly from the Kubernetes API).
- Switch to the Logs tab:
- Filter to the deployment time
- Look for startup errors or warnings
- Check for configuration issues.
Common deployment issues:
- Missing environment variables
- Database migration failures
- Dependency version incompatibilities
- Incorrect configuration values
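To cross-check the version details shown in the Properties tab, a minimal sketch against the Kubernetes API can read the rollout revision and container images directly. The service and namespace names are hypothetical.

```python
# Minimal sketch: read the rollout revision and container images for a
# Deployment. Service and namespace names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment("payment-service", "production")
# Standard annotation maintained by the deployment controller.
revision = (dep.metadata.annotations or {}).get(
    "deployment.kubernetes.io/revision"
)

for c in dep.spec.template.spec.containers:
    print(f"revision={revision} container={c.name} image={c.image}")
```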
Configuration changes
When ConfigMap or Secret updates appear:
- View the Amend insight details.
- Check which configuration values changed (the sketch below shows how to inspect the live values).
- Review Logs after the change:
- Application reload or restart messages
- Configuration parsing errors
- Connection failures (if database or API credentials changed).
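To verify what the application is actually loading after a ConfigMap Amend, a minimal sketch can dump the live values. The ConfigMap and namespace names are hypothetical.

```python
# Minimal sketch: inspect the live values of a ConfigMap after a change.
# ConfigMap and namespace names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

cm = core.read_namespaced_config_map("payment-config", "production")
for key, value in (cm.data or {}).items():
    print(f"{key} = {value}")
```

Keep in mind that Pods consuming a ConfigMap as environment variables only pick up changes on restart, which is why reload or restart messages in the logs matter here.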
Scale events
When HPA or manual scale events appear:
- Check whether the event was a scale-up or a scale-down.
- Correlate with load patterns:
- Scale up during traffic spikes (expected)
- Scale down during low traffic (expected)
- Rapid scale up/down cycles (potential thrashing).
- Look for issues after scale events (see the sketch below):
- New Pods in CrashLoopBackOff
- Pods not becoming ready
- Load balancer not routing to new Pods.
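A minimal sketch for the failure checks above flags Pods stuck in CrashLoopBackOff or not yet ready. The namespace and label selector are hypothetical.

```python
# Minimal sketch: flag Pods that failed to come up after a scale event.
# Namespace and label selector are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pods = core.list_namespaced_pod(
    "production", label_selector="app=payment-service"
)
for pod in pods.items:
    for cs in pod.status.container_statuses or []:
        waiting = cs.state.waiting
        if waiting and waiting.reason == "CrashLoopBackOff":
            print(f"{pod.metadata.name}: {waiting.reason} - {waiting.message}")
        elif not cs.ready:
            print(f"{pod.metadata.name}: container {cs.name} not ready")
```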
Track changes across the environment
Monitor changes across time ranges and specific areas of your infrastructure.
Filter by time range
To see all changes in a specific time window:
- In the entity catalog or RCA workbench, set the time range.
- Filter to Amend insights only.
- Review all entities that changed in that window. (A sketch of the equivalent raw-event query follows the list below.)
This is useful for:
- Post-incident review (what changed during the incident?)
- Deployment auditing (what was deployed today?)
- Change freeze verification (were changes made during the freeze?)
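For a raw-event equivalent of this window filter, here is a minimal sketch that lists every cluster event inside a time range. The namespace and window bounds are hypothetical.

```python
# Minimal sketch: list all cluster events in a time window, e.g. for
# post-incident review. Namespace and window bounds are hypothetical.
from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

start = datetime(2024, 1, 15, 10, 0, tzinfo=timezone.utc)
end = datetime(2024, 1, 15, 11, 0, tzinfo=timezone.utc)

for e in core.list_namespaced_event("production").items:
    ts = e.last_timestamp or e.event_time
    if ts and start <= ts <= end:
        print(ts, e.involved_object.kind, e.involved_object.name, e.reason)
```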
Filter by namespace or environment
To track changes in specific areas:
- Use the Namespace dropdown to select your namespace.
- Also filter to Amend insights using the Insight Rings filter.
- See all changes affecting your services.
Useful for team-specific change tracking.
Common change-related patterns
Recognize these typical scenarios where changes trigger problems.
Bad deployment rollout
Pattern: Deployment Amend followed immediately by errors on the same service
Symptoms:
- Error rate spikes within minutes of deployment
- Latency increases on new version
- Pods restarting or crashing
Action:
- Roll back the deployment (see the rollback sketch below).
- Check logs from the new version for errors.
- Test the change in a lower environment.
- Fix the issue and redeploy.
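For the rollback step, `kubectl rollout undo` is the standard Kubernetes mechanism; this minimal sketch invokes it from Python for consistency with the other examples. The service and namespace names are hypothetical.

```python
# Minimal sketch: roll back a Deployment to its previous revision and wait
# for the rollback to complete. Names are hypothetical.
import subprocess

subprocess.run(
    ["kubectl", "rollout", "undo",
     "deployment/payment-service", "-n", "production"],
    check=True,
)
# Block until the rolled-back revision is fully available before declaring
# the incident mitigated.
subprocess.run(
    ["kubectl", "rollout", "status",
     "deployment/payment-service", "-n", "production"],
    check=True,
)
```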
Configuration mismatch
Pattern: ConfigMap/Secret update followed by service errors or failures
Symptoms:
- Service can’t connect to database (credentials changed)
- Feature flags cause unexpected behavior
- Pods restarting due to invalid configuration
Action:
- Revert the configuration change.
- Verify configuration values.
- Test configuration in staging before applying to production.
Scale-induced issues
Pattern: HPA scale event followed by Pod failures or error rate increases
Symptoms:
- New Pods fail to start (resource constraints)
- Connection pool exhaustion as Pods scale up
- Race conditions exposed by rapid scaling
Action:
- Check Pod logs for startup failures.
- Review resource requests and limits (see the sketch below).
- Adjust HPA thresholds or resource allocations.
- Investigate application concurrency issues.
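To review requests and limits on the scaled workload, a minimal sketch reads them from the Deployment spec. The service and namespace names are hypothetical.

```python
# Minimal sketch: print resource requests and limits for each container in
# a Deployment. Missing or too-low values are a common cause of failed
# scale-ups. Names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment("payment-service", "production")
for c in dep.spec.template.spec.containers:
    print(f"{c.name}: requests={c.resources.requests} "
          f"limits={c.resources.limits}")
```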
Cascading deployment impact
Pattern: Deployment on one service causes errors on multiple upstream services
Symptoms:
- Service A deployed (Amend)
- Service A shows errors (breaking change)
- Services B, C, D that call Service A show timeout errors
Action:
- Roll back the Service A deployment.
- Review API compatibility and breaking changes.
- Coordinate deployments with dependent services.
- Use feature flags or API versioning for safer rollouts.
Use Amend insights for proactive monitoring
Establish regular practices to catch change-related issues early.
Regular change review
Establish a practice of reviewing changes:
Daily:
- Filter the entity catalog to Amend insights from the last 24 hours.
- Review what changed in production.
- Cross-reference with error or latency increases.
- Flag suspicious correlations for investigation.
After incidents:
- Check RCA workbench timeline for Amend insights.
- Identify what changed before or during the incident.
- Document changes in post-mortem.
- Add safeguards to prevent similar issues.
Deployment validation
After deploying a service:
- Filter the entity catalog to the deployed service.
- Check for Amend insight confirming deployment.
- Monitor for Error, Anomaly, or Saturation insights appearing afterward (the sketch below automates a basic restart check).
- If errors appear, investigate immediately before the change propagates.
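As a basic automated complement to this validation, a minimal sketch can poll for container restarts in the minutes after a rollout. The names, polling interval, and duration are hypothetical.

```python
# Minimal sketch: poll for container restarts after a deploy. Names, the
# polling interval, and the duration are hypothetical.
import time
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for _ in range(10):  # roughly ten minutes at one check per minute
    pods = core.list_namespaced_pod(
        "production", label_selector="app=payment-service"
    )
    restarts = sum(
        cs.restart_count
        for pod in pods.items
        for cs in pod.status.container_statuses or []
    )
    print(f"total restarts: {restarts}")
    if restarts > 0:
        print("Restarts detected after the deploy -- investigate now.")
        break
    time.sleep(60)
```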
Change correlation dashboard
Create a bookmarked view:
- Filter to Amend and Error insights.
- Set the time range to the last 4 hours to capture recent changes.
- Bookmark as “Recent Changes and Errors”.
- Check regularly to spot change-related issues early.
Combine with other workflows
Integrate change tracking with other knowledge graph features for comprehensive analysis.
For incidents
- Use the investigate incidents workflow in the RCA workbench.
- Look for Amend insights on the timeline.
- Identify if a change triggered the incident.
- Use the explore dependencies workflow to see the impact.
For proactive monitoring
- Use the monitor services workflow with the Amend filter.
- Watch for deployments on critical services.
- Validate health after each deployment.
- Catch issues before they escalate.
Best practices
Follow these practices to minimize change-related incidents and improve monitoring.
Change management
- Deploy during low-traffic periods for critical services
- Monitor for 30 minutes after deploying to catch early issues
- Use canary or blue-green deployments to limit blast radius
- Coordinate multi-service changes to avoid breaking dependencies
Monitor changes
- Set up alerts for Error insights appearing shortly after Amend insights
- Review change history during incident post-mortems
- Track change frequency to identify teams or services with frequent deployments
- Document problematic changes in runbooks for faster future diagnosis
Rollback readiness
- Have rollback procedures documented for each service
- Test rollback in staging environments
- Automate rollback where possible (feature flags, deployment automation)
- Monitor after rollback to ensure it resolved the issue
Next steps
When you identify a problematic change:
- Immediate: Roll back the change or apply a hotfix
- Short-term: Investigate logs and traces to understand what went wrong
- Long-term: Improve testing, add monitoring, or adjust deployment practices
Related workflows
- Investigate incidents - Use Amend insights during root cause analysis
- Monitor services - Watch for issues after deployments
- Explore dependencies - Understand which services are affected by changes
Additional resources
- RCA workbench - Correlate changes with errors on a timeline
- Entity catalog - Filter and track changes across services