SLA Compliance Dashboard
Track GitHub's 99.9% uptime commitment for each service by calendar quarter. Each service is measured independently against the SLA target. Data sourced from the GitHub Status API.
Total Incidents
Major Impacts
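These headline counters can be reproduced from the public GitHub Status API, which is a standard Statuspage v2 deployment. The sketch below is illustrative: the `incidents.json` endpoint and the `impact` field follow the stock Statuspage API and should be verified against live responses.

```python
# Minimal sketch: derive the headline counters from the public
# GitHub Status API (a standard Statuspage v2 deployment).
# The endpoint path and the "impact" field are assumptions based on
# the stock Statuspage API; verify against the live responses.
import requests
from collections import Counter

STATUS_API = "https://www.githubstatus.com/api/v2/incidents.json"

resp = requests.get(STATUS_API, timeout=10)
resp.raise_for_status()
incidents = resp.json()["incidents"]

# Statuspage reports impact as one of: none, minor, major, critical.
by_impact = Counter(i["impact"] for i in incidents)

print(f"Total Incidents: {len(incidents)}")
print(f"Major Impacts:   {by_impact['major'] + by_impact['critical']}")
```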
Quarterly SLA Overview
2026-Q2
2026-Q1 (SLA Violation)
2025-Q4 (SLA Violation)
2025-Q3 (SLA Violation)
2025-Q2 (SLA Violation)
2025-Q1 (SLA Violation)
2024-Q4 (SLA Violation)
2024-Q3 (SLA Violation)
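For context on how little headroom the target allows: at 99.9%, a quarter's downtime budget is on the order of two hours per service. Below is a minimal sketch of the arithmetic, assuming an illustrative 91-day quarter (real quarters run 90 to 92 days):

```python
# Sketch of the downtime budget implied by a 99.9% quarterly SLA.
# QUARTER_DAYS is an illustrative assumption; calendar quarters
# actually run 90-92 days, so compute the exact length in practice.
QUARTER_DAYS = 91
TARGET = 0.999

budget_minutes = QUARTER_DAYS * 24 * 60 * (1 - TARGET)
print(f"Allowed downtime per quarter: {budget_minutes:.1f} minutes")
# -> 131.0 minutes: a single two-hour-plus outage can consume the
#    entire quarter's budget for the affected service.

def violates_sla(downtime_minutes: float, quarter_days: int = QUARTER_DAYS) -> bool:
    """True if observed downtime exceeds the quarter's 99.9% budget."""
    return downtime_minutes > quarter_days * 24 * 60 * (1 - TARGET)
```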
Recent Incidents
GitHub audit logs are unavailable
3 updates
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
A routine credential rotation failed for our audit logs service; we have redeployed the service and are waiting for recovery.
We are investigating reports of impacted performance for some GitHub services.
Disruption with GitHub's code search
7 updates
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Code search has recovered and is serving production traffic.
We have stabilized Code Search infrastructure, and are in the final stages of validation before slowly reintroducing production traffic.
We are still working to recover to a serviceable state and expect to have a more substantial update within another two hours.
We are observing some recovery for Code Search queries, but customers should be aware that the data being served may be stale, especially for changes that took place after 07:00 UTC today (1 April 2026). We are still working on recovering our ingestion pipeline and synchronizing the indexed data. We will update again within 2 hours.
We identified an issue in our ingestion pipeline that degraded the freshness of Code Search results. While we were fixing the ingestion pipeline issue, a deployment caused a loss of dynamic configuration, which is causing most requests for Code Search results to fail. We are working to restore the service and to re-ingest the misaligned data.
We are investigating reports of impacted performance for some GitHub services.
Incident with Copilot
9 updates
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
The success rate and latency for creating and viewing agent sessions have stabilized at baseline levels; we are continuing to monitor recovery.
The degradation has been mitigated. We are monitoring to ensure stability.
The success rate for creating and viewing agent sessions has stabilized, and we're continuing to monitor latency, which is trending toward baseline levels.
The degradation has been mitigated. We are monitoring to ensure stability.
The degradation affecting Copilot has been mitigated. We are monitoring to ensure stability.
Users may see increased latency and intermittent errors when viewing or creating agent sessions. We are working on mitigations to return to baseline performance and success rate.
We are investigating reports of issues with service(s): Copilot Dotcom Agents. We will continue to keep users updated on progress towards mitigation.
We are investigating reports of degraded performance for Copilot
Incident with Pull Requests: High percentage of 500s
11 updates
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
The degradation affecting Pull Requests has been mitigated. We are monitoring to ensure stability.
We continue to see a small subset of repositories experiencing timeouts and elevated latency in Pull Requests, affecting under 1% of requests.
Error rates remain elevated across multiple pull request endpoints. We are pursuing multiple potential mitigations.
We continue to experience elevated error rates affecting Pull Requests. An earlier fix resolved one component of the issue, but some users may still encounter intermittent timeouts when viewing or interacting with pull requests. Our teams are actively investigating the remaining causes.
We identified an issue causing increased errors when accessing Pull Requests. The mitigation is being applied across our infrastructure and we will continue to provide updates as the mitigation rolls out.
We are seeing recovery in latency and timeouts of requests related to pull requests, even though 500s are still elevated. While we are continuing to investigate, we are applying a mitigation and expect further recovery after it is applied.
We are continuing to investigate increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.
We are investigating increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.
We are seeing a higher-than-average number of 500s due to timeouts across GitHub services. We have a potential mitigation in flight and are continuing to investigate.
We are investigating reports of degraded performance for Pull Requests
Issues with metered billing report generation
7 updates
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
The degradation has been mitigated. We are monitoring to ensure stability.
We have applied mitigations to a data store related to billing reports, and are seeing partial recovery to billing report generation. We continue to monitor for full recovery.
We are seeing a high number of 500s due to timeouts across GitHub services. We are redeploying some of our core services and expect that this will allow us to recover.
We're continuing to see high failure rates on billing report generation, and are working on mitigations for a data store related to billing reports.
We're seeing issues related to metered billing reports, intermittently affecting metered usage graphs and reports on the billing page. We have identified an issue with a data store, and are working on mitigations.
We are investigating reports of impacted performance for some GitHub services.