2025-Q2
Apr 1, 2025 - Jun 30, 2025
Service Features
Time-based uptime calculation for the 131,040 minutes in this quarter
Downtime Definition: Minutes with >5% error rate (approximated from incident data)
| Component | Uptime % | Downtime | Incidents | Status | Service Credit |
|---|---|---|---|---|---|
| Git Operations | 99.9182% | 1h 47m | 3 | Pass | None |
| API Requests | 99.8811% | 2h 36m | 5 | Violation | 10% |
| Issues | 99.8422% | 3h 27m | 6 | Violation | 10% |
| Pull Requests | 99.8771% | 2h 41m | 9 | Violation | 10% |
| Webhooks | 99.9771% | 30m | 1 | Pass | None |
| Pages | 99.9016% | 2h 9m | 3 | Pass | None |
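To make the time-based method above concrete, here is a minimal Python sketch. The 99.9% threshold and the single 10% credit tier are assumptions inferred from the Status and Service Credit columns in the table, not documented SLA terms, and the function names are illustrative.

```python
# Minimal sketch of the time-based uptime calculation described above.
# Assumptions (not from the source): a 99.9% quarterly uptime threshold and a
# single 10% credit tier, inferred from the Status / Service Credit columns.

TOTAL_MINUTES = 131_040  # Apr 1 - Jun 30, 2025: 91 days * 24 h * 60 min
SLA_THRESHOLD = 99.9     # assumed threshold separating "Pass" from "Violation"


def time_based_uptime(downtime_minutes: float, total_minutes: int = TOTAL_MINUTES) -> float:
    """Uptime % = (total minutes - minutes with >5% error rate) / total minutes * 100."""
    return (total_minutes - downtime_minutes) / total_minutes * 100


def service_credit(uptime_pct: float) -> str:
    """Map an uptime percentage to a credit tier (assumed single 10% tier)."""
    return "None" if uptime_pct >= SLA_THRESHOLD else "10%"


# Example: API Requests with 2h 36m (156 minutes) of downtime.
uptime = time_based_uptime(156)
print(f"{uptime:.4f}% -> credit: {service_credit(uptime)}")  # ~99.8810% -> credit: 10%
```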
Actions
Execution-based calculation (workflow success rate)
| Component | Uptime % | Downtime | Incidents |
|---|---|---|---|
| Actions | 99.6774% | 7h 3m | 10 |
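As a rough sketch of the execution-based method, the example below treats uptime as the workflow success rate and converts the unavailability fraction into the downtime-equivalent minutes shown in the table; that conversion is an assumption on our part, not a documented formula.

```python
# Sketch of the execution-based calculation used for Actions.
# Assumption: the "Downtime" column is the failure fraction applied to the
# quarter's 131,040 minutes; the source does not spell out this conversion.

TOTAL_MINUTES = 131_040


def execution_based_uptime(successful_runs: int, total_runs: int) -> float:
    """Uptime % = successful workflow runs / total workflow runs * 100."""
    return successful_runs / total_runs * 100


def downtime_equivalent_minutes(uptime_pct: float) -> float:
    """Convert an uptime percentage into downtime-equivalent minutes."""
    return (1 - uptime_pct / 100) * TOTAL_MINUTES


print(round(downtime_equivalent_minutes(99.6774)))  # ~423 minutes, i.e. about 7h 3m
```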
Packages
Hybrid calculation with two separate metrics
1. Package Transfers: (Total transfers - Failed transfers) / Total transfers × 100
2. Package Storage: (Total minutes - Minutes with >5% error rate) / Total minutes × 100
| Component | Uptime % | Downtime | Incidents |
|---|---|---|---|
| Packages | 99.9664% | 44m | 1 |
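The two Packages metrics above map directly onto small functions. Combining them by taking the lower of the two is an assumption in this sketch; the source defines the individual metrics but not how they are merged into a single reported number.

```python
# Sketch of the hybrid Packages calculation using the two metrics listed above.
# Assumption: the reported component uptime is the lower (worst) of the two
# metrics; the source defines the metrics but not the combination rule.

def package_transfer_uptime(total_transfers: int, failed_transfers: int) -> float:
    """(Total transfers - Failed transfers) / Total transfers * 100."""
    return (total_transfers - failed_transfers) / total_transfers * 100


def package_storage_uptime(total_minutes: int, degraded_minutes: float) -> float:
    """(Total minutes - Minutes with >5% error rate) / Total minutes * 100."""
    return (total_minutes - degraded_minutes) / total_minutes * 100


def packages_uptime(total_transfers: int, failed_transfers: int,
                    total_minutes: int, degraded_minutes: float) -> float:
    """Worst of the two metrics, as an assumed combination of the hybrid method."""
    return min(
        package_transfer_uptime(total_transfers, failed_transfers),
        package_storage_uptime(total_minutes, degraded_minutes),
    )
```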
Incidents in 2025-Q2
55 incidents occurred during this quarter
Disruption with Claude 3.7 Sonnet in Copilot Chat
4 updates
The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 is once again available in Copilot Chat and across IDE integrations (VS Code, Visual Studio, JetBrains). We will continue monitoring to ensure stability, but mitigation is complete.
On June 30th, 2025, between approximately 18:20 and 19:55 UTC, the Copilot service experienced a degradation of the Claude Sonnet 3.7 model due to an issue with our upstream provider. Users encountered elevated error rates when using Claude Sonnet 3.7. No other models were impacted. The issue was resolved by a mitigation put in place by our provider. GitHub is working with our provider to further improve the resiliency of the service to prevent similar incidents in the future.
We are experiencing degraded availability for the Claude 3.7 Sonnet model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
We are currently investigating this issue.
Incident With Actions
1 update
Due to a degradation of one instance of our internal message delivery service, a percentage of jobs started between 06/30/2025 19:18 UTC and 06/30/2025 19:50 UTC failed, and are no longer retry-able. Runners assigned to these jobs will automatically recover within 24 hours, but deleting and recreating the runner will free up the runner immediately.
Disruption with some GitHub services
4 updates
On June 26, 2025, between 17:10 UTC and 23:30 UTC, around 40% of attempts to create a repository from a template repository failed. The failures were an unexpected result of a gap in testing and observability. We mitigated the incident by rolling back the deployment. We are working to improve our testing and automatic detection of errors associated with failed template repository creation.
We identified an internal change that was causing errors when creating a repository from a template. This change has now been rolled back, and customers should no longer encounter errors when creating repositories from templates.
We are currently investigating this issue.
Users may experience errors when creating a repository from a template. The error message may prompt the user to delete the repository; however, this deletion attempt will not be successful. We are investigating the cause of these errors.
GitHub Enterprise Importer delays
6 updates
On June 26th, between 14:42 UTC and 18:05 UTC, the GitHub Enterprise Importer (GEI) service was in a degraded state, during which time customers of the service experienced extended repository migration durations. Our investigation found that the combined effect of several database updates resulted in the severe throttling of GEI to preserve overall database health. We have taken steps to prevent additional impact and are working to implement additional safeguards to prevent similar incidents from occurring in the future.
The earlier delays affecting GitHub Enterprise Importer queries and jobs have now been resolved and are operating normally. Thank you for your patience while we investigated and addressed the issue.
We're continuing to investigate delays with GitHub Enterprise Importer, including potential delays with queries and jobs. Next update in 60 minutes.
We're continuing to investigate delays with GitHub Enterprise Importer, including potential delays with infrastructure. Next update in 60 minutes.
GitHub Enterprise Importer is experiencing degraded throughput, resulting in significant slowdowns in migration processes and extended wait times for customers.
We are currently investigating this issue.
Repository Navigation Bar Missing in GitHub Enterprise Cloud
3 updates
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
We have identified that the navigation bar is missing in GitHub Enterprise Cloud with data residency instances for the repositories related pages and are currently attempting a mitigation.
We are currently investigating this issue.
Disruption with the GitHub mobile android application
3 updates
Between June 19th, 2025 11:35 UTC and June 20th, 2025 11:20 UTC, the GitHub Mobile Android application was unable to log in new users. The iOS app was unaffected. This was due to a new GitHub App feature being tested internally, which was inadvertently enforced for all GitHub-owned applications, including GitHub Mobile. A mismatch in client and server expectations due to this feature caused logins to fail. We mitigated the incident by disabling the feature flag controlling the feature. We are working to improve our time to detection and put in place stronger guardrails that reduce impact from internal testing on applications used by all customers.
We are investigating reports that some users are unable to sign in to the GitHub app on Android. Normal functionality is otherwise available. Our team is actively working to identify the cause.
We are currently investigating this issue.
Disruption with some GitHub services
4 updates
On June 18, 2025 between 22:20 UTC and 23:00 UTC the Claude Sonnet 3.7 and Claude Sonnet 4 models for GitHub Copilot Chat experienced degraded performance. During the impact, some users would receive an immediate error when making a request to a Claude model. This was due to upstream errors with one of our model providers, which have since been resolved. We mitigated the impact by disabling the affected provider endpoints to reduce user impact, redirecting Claude Sonnet requests to additional partners. We are working to update our incident response playbooks for infrastructure provider outages and improve our monitoring and alerting systems to reduce our time to detection and mitigation of issues like this one in the future.
We are experiencing degraded availability for the Claude 4 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.Other models are available and working as expected. We recommend using Claude 3.7 as an alternative.
Copilot is experiencing degraded performance. We are continuing to investigate.
We are currently investigating this issue.
Partial Actions Cache degradation
6 updates
On June 18, 2025, between 08:21 UTC and 18:47 UTC, some Actions jobs experienced intermittent failures downloading from the Actions Cache service. During the incident, 17% of workflow runs experienced cache download failures, resulting in a warning message in the logs and performance degradation. The disruption was caused by a network issue in our database systems that led to a database replica getting out of sync with the primary. We mitigated the incident by routing cache download URL requests to bypass the out-of-sync replica until it was fully restored. To prevent this class of incidents, we are developing capability in our database system to more robustly bypass out-of-sync replicas. We are also implementing improved monitoring to help us detect similar issues more quickly going forward.
We are continuing to rollout a mitigation and are progressing towards having this rolled out for all customers.
We are currently deploying a mitigation for this issue and will be rolling it out shortly. We will update our progress as we monitor the deployment.
We are actively investigating and working on a mitigation for database instability leading to replication lag in the Actions Cache service. We will continue to post updates on progress towards mitigation.
We are currently investigating this issue.
The Actions Cache service is experiencing degradation in a number of regions, causing cache misses when attempting to download cache entries. This is not causing workflow failures, but workflow runtime might be elevated for certain runs.
Partial Degradation in Issues Experience
5 updates
On June 18, 2025, between 15:15 UTC and 19:29 UTC, the Issues service was degraded, and certain GraphQL queries accessing the `ReactionGroup.reactors` field returned errors. Our query routing infrastructure was impacted by exceptions from a particular database migration, resulting in errors for an average of 0.0097% of overall GraphQL requests (peaking at 0.02%). We mitigated the incident by reverting the migration. We continue to investigate the cause of the exceptions and are holding off on similar migrations until the underlying issue is understood and resolved.
We have confirmed that we are currently within SLA for Issues experience. Remaining clean up will complete over the next few hours to fully restore the ability to search Issues by reaction as well as related GraphQL API queries.
We have confirmed that impact is restricted to failing to display reactions on some issues and searching issues by reaction. Mitigation is in progress to restore these features and should be fully rolled out to all customers in the next few hours.
Some users are seeing errors when accessing issues on GitHub. We have identified the problem and are working on a revert to restore full functionality.
We are investigating reports of degraded performance for Issues
Incident with multiple GitHub services
23 updates
On June 17, 2025, between 19:32 UTC and 20:03 UTC, an internal routing policy deployment to a subset of network devices caused reachability issues for certain network address blocks within our datacenters. Authenticated users of the github.com UI experienced 3-4% error rates for the duration. Authenticated callers of the API experienced 40% error rates. Unauthenticated requests to the UI and API experienced nearly 100% error rates for the duration. The Actions service experienced 2.5% of runs being delayed for an average of 8 minutes and 3% of runs failing. Large File Storage (LFS) requests experienced 0.978% errors. At 19:54 UTC, the deployment was rolled back, and network availability for the affected systems was restored. At 20:03 UTC, we fully restored normal operations. To prevent similar issues, we are expanding our validation process for routing policy changes.
Actions is operating normally.
Codespaces is experiencing degraded performance. We are continuing to investigate.
Webhooks is operating normally.
Pull Requests is operating normally.
API Requests is operating normally.
Issues is operating normally.
API Requests is experiencing degraded performance. We are continuing to investigate.
Copilot is operating normally.
We experienced problems with multiple services, causing disruptions for some users. We have identified the cause and are rolling out changes to restore normal service. Many services are recovering, but full recovery is ongoing.
Pages is operating normally.
Pull Requests is experiencing degraded performance. We are continuing to investigate.
Copilot is experiencing degraded performance. We are continuing to investigate.
Pull Requests is experiencing degraded availability. We are continuing to investigate.
Webhooks is experiencing degraded performance. We are continuing to investigate.
Actions is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of issues with many services impacting segments of customers. We will continue to keep users updated on progress towards mitigation.
API Requests is experiencing degraded availability. We are continuing to investigate.
API Requests is experiencing degraded performance. We are continuing to investigate.
Copilot is experiencing degraded availability. We are continuing to investigate.
Pages is experiencing degraded performance. We are continuing to investigate.
Issues is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for Copilot
Incident with Actions
3 updates
Multiple services critical to GitHub's attestation infrastructure experienced an outage which prevented Fulcio from issuing signing certificates. During the outage, GitHub customers who use the "actions/attest-build-provenance" action from public repositories were not able to generate attestations.
Customers are currently unable to generate attestations from public repositories due to a broader outage with our partners.
We are investigating reports of degraded performance for Actions
Some Copilot chat models are failing requests
9 updates
All impacted chat models have recovered, and users should no longer experience reduced availability.
On June 12, 2025, between 17:55 UTC and 21:07 UTC the GitHub Copilot service was degraded and experienced unavailability for Gemini models and reduced availability for Claude models. Users experienced significantly elevated error rates for code completions, slow response times, timeouts, and chat functionality interruptions across VS Code, JetBrains IDEs, and GitHub Copilot Chat. This was due to an outage affecting one of our model providers. We mitigated the incident by temporarily disabling the affected provider endpoints to reduce user impact. We are working to update our incident response playbooks for infrastructure provider outages and improve our monitoring and alerting systems to reduce our time to detection and mitigation of issues like this one in the future.
We are seeing recovery in success rates for impacted Claude models (Sonnet 4 and Opus 4), and limited recovery in Gemini models (2.5 Pro and 2.0 Flash). We will continue to monitor and provide updates until full recovery.
Copilot is experiencing degraded performance. We are continuing to investigate.
Claude Sonnet 4 and Opus 4 models continue to have degraded availability in Copilot Chat, VS Code, and other Copilot products. Gemini 2.5 Pro and 2.0 Flash are currently unavailable. Our upstream model provider has indicated that they have identified the problem and are applying mitigations.
Gemini (2.5 Pro and 2.0 Flash) and Claude (Sonnet 4 and Opus 4) chat models in Copilot are still experiencing reduced availability. We are actively communicating with our upstream model provider to resolve the issue and restore full service. We will provide another update by 20:15 UTC.
We redirected requests for Claude 3.7 Sonnet to additional partners and users should see recovery when using that model. We are still experiencing degraded availability for the Gemini (2.5 Pro, 2.0 Flash) and Claude (Sonnet 4, Opus 4) models in Copilot Chat, VS Code and other Copilot products.
We are experiencing degraded availability for the Gemini (2.5 Pro, 2.0 Flash) and Claude (Sonnet 3.7, Sonnet 4, Opus 4) models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
We are currently investigating this issue.
Disruption with some GitHub services
9 updates
Between 2025-06-10 12:25 UTC and 2025-06-11 01:51 UTC, GitHub Enterprise Cloud (GHEC) customers with approximately 10,000 or more users saw performance degradation and 5xx errors when loading the Enterprise Settings’ People management page. Less than 2% of page requests resulted in an error. The issue was caused by a database change that replaced an index required for the page load. The issue was resolved by reverting the database change. To prevent similar incidents, we are improving the testing and validation process for replacing database indexes.
Fix is currently rolling out to production. We will update here once we verify.
We are working to deploy the fix for this issue. We will update again once it is deployed and as we monitor recovery.
We have the fix ready; once it is deployed, we will provide another update confirming that it has resolved the issue.
We have identified the solution to the performance issue and are working on the mitigation. Impact continues to be limited to very large enterprise customers when viewing the People page.
The mitigation to add a supporting index to improve the performance of the People page did not resolve the issue, and we are continuing to investigate a solution.
We are working on the mitigation and anticipate recovery within an hour.
Large enterprise customers may encounter issues loading the People page
We are currently investigating this issue.
Codespaces billing is delayed
3 updates
On June 10, 2025, between 12:15 UTC and 19:04 UTC, Codespaces billing data processing experienced delays due to capacity issues in our worker pool. Approximately 57% of codespaces were affected during this incident; some customers may have observed incomplete or delayed billing usage information in their dashboards and usage reports, and may not have received timely notifications about approaching usage or spending limits. The incident was caused by an increase in the number of jobs in our worker pool without a corresponding increase in capacity, resulting in a backlog of unprocessed Codespaces billing jobs. We mitigated the issue by scaling up worker capacity, allowing the backlog to clear and billing data to catch up. We started seeing recovery immediately at 17:40 UTC and were fully caught up by 19:04 UTC. To prevent recurrence, we are moving critical billing jobs into a dedicated worker pool monitored by the Codespaces team, and are reviewing alerting thresholds to ensure more rapid detection and mitigation of delays in the future.
We've increased capacity to process the Codespaces billing jobs and are seeing recovery; we expect full mitigation within the hour.
We are currently investigating this issue.
Incident with Pull Requests
2 updates
On June 10, 2025, between 14:28 UTC and 14:45 UTC the pull request service experienced a period of degraded performance, resulting in merge error rates exceeding 1%. The root cause was an overloaded host in our Git infrastructure. We mitigated the incident by removing this host from the set of valid replicas until it was healthy again. We are working to improve the various mechanisms that are in place in our existing infrastructure to protect us from such problems, and we will be revisiting why, in this particular scenario, they didn't protect us as expected.
We are investigating reports of degraded performance for Pull Requests
Incident With Copilot
1 update
On June 6, 2025, an update to mitigate a previous incident led to automated scaling of database infrastructure used by Copilot Coding Agent. The clients of the service were not implemented to automatically handle an extra partition, so the service was unable to retrieve data across partitions, resulting in unexpected 404 errors. As a result, approximately 17% of coding sessions displayed an incorrect final state, such as sessions appearing in progress when they were actually completed. Additionally, some Copilot-authored pull requests were missing timeline events indicating task completion. Importantly, this did not affect Copilot Coding Agent’s ability to finish code tasks and submit pull requests. To prevent similar issues in the future we are taking steps to improve our systems and monitoring.
Incident with Copilot
7 updates
Copilot is operating normally.
On June 6, 2025, between 00:21 UTC and 12:40 UTC the Copilot service was degraded and a subset of Copilot Free users were unable to sign up for or use the Copilot Free service on github.com. This was due to a change in licensing code that resulted in some users losing access despite being eligible for Copilot Free. We mitigated this through a rollback of the offending change at 11:39 UTC, after which users were once again able to use their Copilot Free access. As a result of this incident, we have improved monitoring of Copilot changes during rollout. We are also working to reduce our time to detect and mitigate issues like this one in the future.
We are continuing to monitor recovery and expect a complete resolution very shortly.
The changes have been reverted and we are seeing signs of recovery. We expect impact to be largely mitigated, but are continuing to monitor and will update further as progress continues.
We have identified changes that may be causing the issue and are working to revert the offending changes. We will continue to keep users updated as we work toward mitigation.
We are investigating reports of users unable to utilize Copilot Free after a trial subscription has ended for Copilot Pro. We will continue to keep users updated on progress towards mitigation.
We are investigating reports of degraded performance for Copilot
Incident with Actions
6 updates
On June 5th, 2025, between 17:47 UTC and 19:20 UTC the Actions service was degraded, leading to run start delays and intermittent job failures. During this period, 47.2% of runs had delayed starts, and 21.0% of runs failed. The impact extended beyond Actions itself - 60% of Copilot Coding Agent sessions were cancelled, and all Pages sites using branch-based builds failed to deploy (though Pages serving remained unaffected). The issue was caused by a spike in load between internal Actions services exposing a misconfiguration that caused throttling of requests in the critical path of run starts. We mitigated the incident by correcting the service configuration to prevent throttling and have updated our deployment process to ensure the correct configuration is preserved moving forward.
We have applied a mitigation and we are beginning to see recovery. We are continuing to monitor for recovery.
Actions is experiencing degraded availability. We are continuing to investigate.
Users of Actions will see delays in jobs starting or job failures. Users of Pages will see slow or failed deployments.
Pages is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for Actions
Incident with Actions
4 updates
On June 4, 2025, between 14:35 UTC and 15:50 UTC, the Actions service experienced degradation, leading to run start delays. During the incident, about 15.4% of all workflow runs were delayed by an average of 16 minutes. An unexpected load pattern revealed a scaling issue in our backend infrastructure. We mitigated the incident by blocking the requests that triggered this pattern. We are improving our rate limiting mechanisms to better handle unexpected load patterns while maintaining service availability. We are also strengthening our incident response procedures to reduce the time to mitigate for similar issues in the future.
We have applied mitigations and are monitoring for recovery.
We are currently investigating delays with Actions triggering for some users.
We are investigating reports of degraded performance for Actions
Disruption with some GitHub services
4 updates
On May 30, 2025, between 08:10 UTC and 16:00 UTC, the Microsoft Teams GitHub integration service experienced a complete service outage. During this period, the service was unable to deliver notifications or process user requests, resulting in a 100% error rate for all integration functionality except link previews. This outage was due to an authentication issue with our downstream provider. We mitigated the incident by working with our provider to restore service functionality and are working to migrate to more durable authentication methods to reduce the risk of similar issues in the future.
Our team is continuing to work to mitigate the source of the disruption affecting a small set of customers using the GitHub Microsoft Teams integration.
We are experiencing a disruption with our Microsoft Teams integration. Investigations are underway and we will provide further updates as we progress.
We are currently investigating this issue.
Disruption with some GitHub services
7 updates
On May 28, 2025, from approximately 09:45 UTC to 14:45 UTC, GitHub Actions experienced delayed job starts for workflows in public repos using Ubuntu-24 standard hosted runners. This was caused by a misconfiguration in backend caching behavior after a failover, which led to duplicate job assignments and reduced available capacity. Approximately 19.7% of Ubuntu-24 hosted runner jobs on public repos were delayed. Other hosted runners, self-hosted runners, and private repo workflows were unaffected.By 12:45 UTC, we mitigated the issue by redeploying backend components to reset state and scaling up available resources to more quickly work through the backlog of queued jobs. We are working to improve our deployment and failover resiliency and validation to reduce the likelihood of similar issues in the future.
We are continuing to monitor the affected Actions runners to ensure a smooth recovery.
We are observing indications of recovery with the affected Actions runners. The team will continue monitoring systems to ensure a return to normal service.
We're continuing to investigate delays in Actions runners for hosted Ubuntu 24. We will provide further updates as more information becomes available.
Actions is experiencing degraded performance. We are continuing to investigate.
Actions is experiencing high wait times for obtaining standard hosted runners for Ubuntu 24. Other hosted labels and self-hosted runners are not impacted.
We are currently investigating this issue.
Incident with Actions
4 updates
On May 27, 2025, between 09:31 UTC and 13:31 UTC, some Actions jobs experienced failures uploading to and downloading from the Actions Cache service. During the incident, 6% of all workflow runs couldn’t upload or download cache entries from the service, resulting in a non-blocking warning message in the logs and performance degradation. The disruption was caused by an infrastructure update related to the retirement of a legacy service, which unintentionally impacted Cache service availability. We resolved the incident by reverting the change and have since implemented a permanent fix to prevent recurrence. We are improving our configuration change processes by introducing additional end-to-end tests to cover the identified gaps, and implementing deployment pipeline improvements to reduce mitigation time for similar issues in the future.
Mitigation is applied and we’re seeing signs of recovery. We’re monitoring the situation until the mitigation is applied to all affected repositories.
We are experiencing degradation with the GitHub Actions cache service and are working on applying the appropriate mitigations.
We are investigating reports of degraded performance for Actions
Disruption with some GitHub services
2 updates
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
We are currently investigating this issue.
[Retroactive] Incident with Git Operations
1 update
Between 10:00 and 20:00 UTC on May 27, a change to our git proxy service resulted in some git client implementations not being able to consistently push to GitHub. Reverting the change resulted in an immediate resolution of the problem for all customers. The inflated time to detect this failure was due to the relatively few impacted clients. We are re-evaluating the proposed change to understand how we can prevent and detect such failures in the future.
We're experiencing errors
14 updates
On May 26, 2025, between 06:20 UTC and 09:45 UTC GitHub experienced broad failures across a variety of services (API, Issues, Git, etc). These were degraded at times, but peaked at 100% failure rates for some operations during this time. On May 23, a new feature was added to Copilot APIs and monitored during rollout, but it was not tested at peak load. At 6:20 UTC on May 26, load increased on the code path in question and started to degrade a Copilot API because the caching for this endpoint and circuit breakers for high load were misconfigured. In addition, the traffic limiting meant to protect wider swaths of the GitHub API from queuing was not yet covering this endpoint, meaning it was able to overwhelm the capacity to serve traffic and cause request queuing. We were able to mitigate the incident by turning off the endpoint until the behavior could be reverted. We are already working on a quality of service strategy for API endpoints like this that will limit the impact of a broad incident and are rolling it out. We are also addressing the specific caching and circuit breaker misconfigurations for this endpoint, which would have reduced the time to mitigate this particular incident and the blast radius.
We continue to see signs of recovery.
Issues is operating normally.
Git Operations is operating normally.
API Requests is operating normally.
Packages is operating normally.
Copilot is operating normally.
Actions is operating normally.
Packages is experiencing degraded performance. We are continuing to investigate.
Copilot is experiencing degraded performance. We are continuing to investigate.
Actions is experiencing degraded performance. We are continuing to investigate.
We are continuing to investigate degraded performance.
Issues is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for API Requests and Git Operations
Disruption with some GitHub services
4 updates
API Requests is operating normally.
On May 23, 2025, between 17:40 UTC and 18:30 UTC public API and UI requests to read and write Git repository content were degraded and triggered user-facing 500 responses. On average, the error rate was 61% and peaked at 88% of requests to the service. This was due to the introduction of an uncaught fatal error in an internal service. A manual rollback was required, which increased the time to remediate the incident. We are working to automatically detect and revert a change based on alerting to reduce our time to detection and mitigation. In addition, we are adding relevant test coverage to prevent errors of this type getting to production.
API Requests is experiencing degraded performance. We are continuing to investigate.
We are currently investigating this issue.
Delayed GitHub Actions Jobs
6 updates
We've applied a mitigation which has resolved these delays.
On May 22, 2025, between 07:06 UTC and 09:10 UTC, the Actions service experienced degradation, leading to run start delays. During the incident, about 11% of all workflow runs were delayed by an average of 44 minutes. A recently deployed change contained a defect that caused improper request routing between internal services, resulting in security rejections at the receiving endpoint. We resolved this by reverting the problematic change and are implementing enhanced testing procedures to catch similar issues before they reach production environments.
Our investigation continues. At this stage GitHub Actions Jobs are being executed, albeit with delays to the start of execution in some cases.
We are continuing to investigate these delays.
We're investigating delays with the execution of queued GitHub Actions jobs.
We are investigating reports of degraded performance for Actions
Incident with Webhooks
1 update
A change to the webhooks UI removed the ability to add webhooks. The timeframe of this impact was between May 20th, 2025 20:40 UTC and May 21st, 2025 12:55 UTC. Existing webhooks, as well as adding webhooks via the API were unaffected. The issue has been fixed.
Incident with Copilot
4 updates
On May 20, 2025, between 18:18 UTC and 19:53 UTC, Copilot Code Completions were degraded in the Americas. On average the error rate was 50% of requests to the service in the affected region. This was due to a misconfiguration in load distribution parameters after a scale down operation. We mitigated the incident by addressing the misconfiguration. We are working to improve our automated failover and load balancing mechanisms to reduce our time to detection and mitigation of issues like this one in the future.
Copilot is operating normally.
We are experiencing degraded availability for Copilot Code Completions in the Americas. We are working on resolving the issue.
We are investigating reports of degraded performance for Copilot
Elevated error rates for Claude Sonnet 3.7
7 updates
Copilot is operating normally.
On May 20, 2025, between 12:09 PM UTC and 4:07 PM UTC, the GitHub Copilot service experienced degraded availability, specifically for the Claude Sonnet 3.7 model. During this period, the success rate for Claude Sonnet 3.7 requests was highly variable, down to approximately 94% during the most severe spikes. Other models remained available and working as expected throughout the incident. The issue was caused by capacity constraints in our model processing infrastructure that affected our ability to handle the large volume of Claude Sonnet 3.7 requests. We mitigated the incident by rebalancing traffic across our infrastructure, adjusting rate limits, and working with our infrastructure teams to resolve the underlying capacity issues. We are working to improve our infrastructure redundancy and implementing more robust monitoring to reduce detection and mitigation time for similar incidents in the future.
The issues with our upstream model provider have been resolved, and Claude Sonnet 3.7 is once again available in Copilot Chat, VS Code and other Copilot products. We will continue monitoring to ensure stability, but mitigation is complete.
We are continuing to work with our model providers on mitigations to increase the success rate of Sonnet 3.7 requests made via Copilot.
We’re still working with our model providers on mitigations to increase the success rate of Sonnet 3.7 requests made via Copilot.
We are experiencing degraded availability for the Claude Sonnet 3.7 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.
We are investigating reports of degraded performance for Copilot
GitHub Enterprise Importer delays
8 updates
Between May 16, 2025, 1:21 PM UTC and May 17, 2025, 2:26 AM UTC, the GitHub Enterprise Importer service was degraded and experienced slow processing of customer migrations. Customers may have seen extended wait times for migrations to start or complete. This incident was initially observed as a slowdown in migration processing. During our investigation, we identified that a recent change aimed at improving API query performance caused an increase in load signals, which triggered migration throttling. As a result, the performance of migrations was negatively impacted, and overall migration duration increased. In parallel, we identified a race condition that caused a specific migration to be repeatedly re-queued, further straining system resources and contributing to a backlog of migration jobs, resulting in accumulated delays. No data was lost, and all migrations were ultimately processed successfully. We have reverted the feature flag associated with the query change and are working to improve system safeguards to help prevent similar race condition issues from occurring in the future.
We continue to see signs of recovery for GitHub Enterprise Importer migrations. Queue depth is decreasing and migration duration is trending toward normal levels. We will continue to monitor improvements.
We have identified the source of increased load and have started mitigation. Customers using the GitHub Enterprise Importer may still see extended wait times until recovery completes.
Investigations on the incident impacting GitHub Enterprise Importer continue. An additional contributing cause has been identified, and we are working to ship additional mitigating measures.
We have taken several steps to mitigate the incident impacting GitHub Enterprise Importer (GEI). We are seeing early indications of system recovery. However, customers may continue to experience longer migrations and extended queue times. The team is continuing to work on further mitigating efforts to speed up recovery.
We are continuing to investigate issues with the GitHub Enterprise Importer. Customers may experience slower migration processes and extended wait times.
We are investigating issues with the GitHub Enterprise Importer. Customers may experience slower migration processes and extended wait times.
We are currently investigating this issue.
Disruption with some GitHub services
3 updates
On May 16th, 2025, between 08:42:00 UTC and 12:26:00 UTC, the data store powering the Audit Log API service experienced elevated latency resulting in higher error rates due to timeouts. About 3.8% of Audit Log API queries for Git events experienced timeouts. The data store team deployed mitigating actions which resulted in a full recovery of the data store’s availability.
We are investigating issues with the audit log. Users querying Git audit log data may observe increased latencies and occasional timeouts.
We are currently investigating this issue.
Disruption with Gemini 2.5 Pro
5 updates
Between May 15, 2025 10:10 UTC and May 15, 2025 22:58 UTC the Copilot service was degraded and returned a high volume of internal server errors for requests targeting Gemini 2.5 Pro, a public preview model. This was due to a high volume of rate limiting by the upstream model provider, similar in volume to the internal server errors during the previous day. We mitigated the incident by temporarily disabling Gemini 2.5 Pro for all Copilot Chat experiences, and then worked with the model provider to ensure model health was sufficiently improved before re-enabling. We are working with the model provider to move to more resilient infrastructure to mitigate issues like this one in the future.
The issues with our upstream model provider have been resolved, and Gemini 2.5 Pro is available again in Copilot Chat, VS Code, and other Copilot products. We will continue monitoring to ensure stability, but mitigation is complete.
We have started to gradually re-enable the Gemini 2.5 Pro model in Copilot Chat, VS Code, and other Copilot products.
We have disabled the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products due to an issue with an upstream model provider. Users may still see these models as available for a brief period, but we recommend switching to a different model. Other models are not impacted and are available. Once our model provider has resolved the issues impacting Gemini 2.5 Pro, we will re-enable it.
We are currently investigating this issue.
Disruption with some GitHub services
7 updates
The issue preventing users from creating Personal Access Tokens (PATs) has been resolved. The root cause was identified and a change was reverted to restore functionality. PAT generation is now working as expected.
On May 15, 2025, between 00:08 UTC and 10:21 UTC, customers were unable to create fine-grained Personal Access Tokens (PATs) on github.com. This incident was triggered by a recent code change to our front end that unintentionally affected the way certain pages loaded and prevented the PAT creation process from completing. We mitigated the incident by reverting the problematic change. To reduce the likelihood of similar issues in the future, we are improving our monitoring for page load anomalies and PAT creation failures and improving our safe deployment practices.
We have identified the cause, and have a working fix. We will continue to update users.
We are exploring the best path forward, but no new update at this stage.
While we have found a possible cause, we have no update on mitigation steps at this stage. We will continue to keep users updated.
We are investigating fine-grained PAT creation failures. We will continue to keep users updated on progress towards mitigation. Existing fine-grained PATs are unaffected.
We are currently investigating this issue.
Disruption with Gemini 2.5 Pro model
7 updates
We have received confirmation from our upstream provider that the issue has been resolved. We are seeing significant recovery. The Gemini 2.5 Pro model is now fully available in Copilot Chat, VS Code, and other Copilot products.
Between May 14, 2025 14:16 UTC and May 15, 2025 01:02 UTC the Copilot service was degraded and returned a high volume of internal server errors for requests targeting Gemini 2.5 Pro, a public preview model. On average, the error rate for Gemini 2.5 Pro was 19.6% and peaked at 41%. This was due to a high volume of internal server errors and rate limiting by the upstream model provider. We mitigated the incident by temporarily disabling Gemini 2.5 Pro for all Copilot Chat experiences, and then worked with the model provider to ensure model health was sufficiently improved before re-enabling. We are working with partners to improve communication speed and are planning to move to more resilient infrastructure to mitigate issues like this one in the future.
We continue experiencing degraded availability for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. We are working closely with our upstream provider to resolve this issue.
We continue experiencing degraded availability for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
We are experiencing degraded availability for the Gemini 2.5 Pro model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.Other models are available and working as expected.
We are continuing to investigate issues with the Gemini 2.5 Pro model, which is in public preview. Users may see intermittent errors with this model.
We are currently investigating this issue.
Disruption with some GitHub services
2 updates
This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
We are currently investigating this issue.
Incident with Git Operations
6 updates
On May 8, 2025, between 14:40 UTC and 16:27 UTC the Git Operations service was degraded, causing some pushes and merges to fail. On average, the error rate was 1.4% with a peak error rate of 2.24%. This was due to a configuration change which unexpectedly led a critical service to shut down on a subset of hosts that store repository data. We mitigated the incident by re-deploying the affected service to restore its functionality. In order to prevent similar incidents from happening again, we identified the cause that triggered this behavior and mitigated it for future deployments. Additionally, to reduce time to detection we will improve monitoring of the impacted service.
Pull Requests is operating normally.
Actions is operating normally.
We have identified the issue and applied mitigations, and are monitoring for recovery.
Actions is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for Git Operations and Pull Requests
Issue Attachments Failing to Upload
4 updates
We have identified the underlying cause of attachment upload failures to Issues and mitigated it by rolling back a feature flag. If you are still experiencing failures when uploading attachments to Issues, please reload your page.
On May 1, 2025 from 22:09 UTC to 23:13 UTC, the Issues service was degraded and users weren't able to upload attachments. The root cause was identified to be a new feature which added a custom header to all client-side HTTP requests, causing CORS errors when uploading attachments to our provider. We mitigated the incident by rolling back the feature flag that added the new header at 22:56 UTC. In order to prevent this from happening again, we are adding new metrics to monitor and ensure the safe rollout of changes to client-side requests.
We are investigating attachment upload failures on Issues. We will continue to keep users updated on progress towards mitigation.
We are investigating reports of degraded availability for Issues
Disruption with Pull Request Ref Updates
3 updates
On April 30, 2025, between 8:02 UTC and 9:05 UTC, the Pull Requests service was degraded and failed to update refs for repositories with higher traffic. This was due to a repository migration creating a larger than usual number of enqueued jobs. This resulted in an increase in job failures, delays for non-migration sourced jobs, and delays to tracking refs. We declared an incident once we confirmed that this issue was not isolated to the migrating repository and other repositories were also failing to process ref updates. We mitigated the incident by shifting the migration jobs to a different job queue. To avoid problems like this in the future, we are revisiting our repository migration process and are working to isolate potentially problematic migration workloads from non-migration workloads.
Some customers of github.com are reporting issues with PR tracking refs not being updated due to processing delays and increased failure rates. We're investigating the source of the issue.
We are investigating reports of degraded performance for Pull Requests
Delays for web and email notification delivery
8 updates
The notification delivery backlog has been processed and notifications are now being delivered as expected.
On April 29th, 2025, between 8:40 AM UTC and 12:50 PM UTC the notifications service was degraded and stopped delivering most web and email notifications as well as some mobile push notifications. This was due to a large and faulty schema migration that rendered a set of database primaries unhealthy, affecting the notification delivery pipelines and causing delays in most of the web and email notification deliveries. We mitigated the incident by stopping the migration and promoting replicas to replace the unhealthy primaries. In order to prevent similar incidents in the future, we are addressing the underlying issues in the online schema tooling and improving the way we interact with the database to not be disruptive to production workloads.
New notification deliveries are occurring in a timely manner and we have processed a significant portion of the backlog. Users may still notice delayed delivery of some older notifications.
Web and email notifications continue to be delivered successfully and the service is in a healthy state. We are processing the backlog of notification deliveries which are currently as much as 30-60 minutes delayed.
We are starting to see signs of recovery, with delayed web and email notifications now being dispatched. The team continues to monitor recovery to ensure a return to normal service.
We are seeing impact on both web and email notifications, with most customers seeing delayed deliveries. The last incident update regarding impact on email notifications was incorrect: email notifications have been experiencing the same delays as web notifications for the duration of the incident. We have applied changes to our system and are monitoring to see if they restore normal service. Updates to follow.
Web notifications are experiencing delivery delays for the majority of customers. We are working to mitigate impact and restore delivery times back within normal operating bounds. Email notifications remain unaffected and are delivering as normal. We will provide further updates as we have more information.
We are currently investigating this issue.
Incident with Git Operations, API Requests and Issues
8 updates
On April 28th, 2025, between 4 AM and 11 AM UTC, ~0.5% of customers experienced HTTP 500 or 429 responses for raw file access (via the GitHub website and APIs). Additionally, ~0.5% of customers may have seen slow pull request page loads and increased timeouts in the GraphQL API. The incident was caused by queueing in serving systems due to a change in traffic patterns, specifically scraping activity targeting our API. We have adjusted limits and added flow control to systems in response to the changing traffic patterns to improve our ability to prevent future large queueing issues. We’ve additionally updated rate limits for unauthenticated requests to reduce overall load; more details are here: https://github.blog/changelog/2025-05-08-updated-rate-limits-for-unauthenticated-requests/
We are seeing signs of recovery and continue to monitor latency.
We continue to investigate impact to Issues and Pull Requests. Customers may see some timeouts as we work towards mitigation.
We are continuing to investigate impact to Issues and Pull Requests. We will provide more updates as we have them.
Users may see timeouts when viewing Pull Requests. We are still investigating the issues related to Issues and Pull Requests and will provide further updates as soon as we can.
Pull Requests is experiencing degraded performance. We are continuing to investigate.
Issues API is currently seeing elevated latency. We are investigating the issue and will provide further updates as soon as we have them.
We are investigating reports of degraded performance for API Requests, Git Operations and Issues
Disruption with some GitHub services
3 updates
Starting at 19:13:50 UTC, the service responsible for importing Git repositories began experiencing errors that impacted both GitHub Enterprise Importer migrations and the GitHub Importer; service was restored at 22:11:00 UTC. At the time, 837 migrations across 57 organizations were affected. Impacted migrations would have shown the error message "Git source migration failed. Error message: An error occurred. Please contact support for further assistance." in the migration logs and required a retry. The root cause of the issue was a recent configuration change that caused our workers, responsible for syncing the Git repository, to lose the access required for the migration. We restored the needed access for the workers, and all dependent services resumed normal operation. We’ve identified and implemented additional safeguards to help prevent similar disruptions in the future.
We are investigating issues with GitHub Enterprise Importer. We will continue to keep users updated on progress towards mitigation.
We are currently investigating this issue.
Incident with Issues, API Requests and Pages
10 updates
On April 23, 2025, between 07:00 UTC and 07:20 UTC, multiple GitHub services experienced degradation caused by resource contention on database hosts. The resulting error rates, which ranged from 2–5% of total requests, led to intermittent service disruption for users. The issue was triggered by heavy workloads on the database leading to connection saturation. The incident was mitigated when database throttling activated, which allowed the system to rebalance connections. This restored traffic flow to the database and restored service functionality. To prevent similar issues in the future, we are reviewing the capacity of the database, improving monitoring and alerting systems, and implementing safeguards to reduce time to detection and mitigation.
A brief problem with one of our database clusters caused intermittent errors around 07:05 UTC for a few minutes. Our systems have recovered and we continue to monitor.
Issues is operating normally.
API Requests is operating normally.
Pages is operating normally.
Actions is operating normally.
Codespaces is operating normally.
Codespaces is experiencing degraded performance. We are continuing to investigate.
Actions is experiencing degraded performance. We are continuing to investigate.
We are investigating reports of degraded performance for API Requests, Issues and Pages
Incident with Pull Requests
6 updates
Pull Requests is operating normally.
On April 16, 2025, between 3:22:36 PM UTC and 5:26:55 PM UTC, the Pull Request service was degraded. On average, 0.7% of page views were affected. This primarily affected logged-out users, but some logged-in users were affected as well. This was due to an error in how certain Pull Request timeline events were rendered, and we resolved the incident by updating the timeline event code. We are enhancing test coverage to include additional scenarios and piloting new tools to prevent similar incidents in the future.
The fix is rolling out and we're seeing recovery for users encountering 500 errors when viewing a pull request.
The fix is currently being deployed, we anticipate this to be fully mitigated in approximately thirty minutes.
Users may experience 500 errors when viewing a PR. Most of the impact is limited to anonymous access, though a small number of logged-in users are also experiencing this. We have the fix prepared and it will be deployed soon.
We are investigating reports of degraded performance for Pull Requests
Disruption with some GitHub services
11 updates
On April 15th, during regular testing, we found a bug in our Copilot Metrics Pipeline infrastructure causing some data used to aggregate Copilot usage for the Copilot Metrics API to not be ingested. As a result of the bug, customer metrics in the Copilot Metrics API would have indicated lower than expected Copilot usage for the previous 28 days. To mitigate the incident, we resolved the bug so that all data from April 14th onwards would be accurately calculated and immediately began backfilling the previous 28 days with the correct data. All data has been corrected as of 2025-04-17 5:34 PM UTC. We have added additional monitoring to catch similar pipeline failures earlier in the future and are working on enhancing our data validation to ensure that all metrics we provide are accurate.
We have resolved issues with data inconsistency for Copilot Metrics API data as of April 17th 2025 1600 UTC. All data is now accurate.
We are continuing to work on correcting the Copilot Metrics API data from March 19th 2025 to April 14th 2025. Data from April 15 and later is accurate. Currently, the API returns about 10% lower usage numbers. Based on current investigations, we estimate a resolution by April 18th 01:00 UTC. We will provide an update if there is a change in the ETA.
We have an updated ETA on correcting all Copilot metrics API data: 20 hours. We won't post more updates here unless the ETA changes.
We are working on correcting the Copilot metrics API source data from March 19th to April 14th. Currently, the API returns usage numbers about 10% lower than actual usage. We don't have an ETA for the resolution at the moment.
The Copilot metrics API (https://docs.github.com/en/enterprise-cloud@latest/rest/copilot/copilot-metrics?apiVersion=2022-11-28) now returns accurate data for April 15th. We're working on correcting the past 27 days, as we are under-reporting certain metrics from this time.
We'll have accurate data for April 15th in the next 60 minutes. We're still working on correcting the data for the additional 27 days before April 15th. The complete correction is estimated to take up to 7 days, but we're working to speed this up. https://docs.github.com/en/enterprise-cloud@latest/rest/copilot/copilot-metrics?apiVersion=2022-11-28 is the specific impacted API.
As we've made further progress on correcting the inconsistencies, we estimate it will take approximately a week for a full recovery. We are investigating options for speeding up the recovery, and we appreciate your patience as we work through this incident.
We are working on correcting the inconsistencies now, our next update we will provide an estimated time when the issue will be fully resolved.
We are currently investigating this issue.
We are currently experiencing degraded performance with our Copilot metrics API, which is temporarily causing partial inconsistencies in the data returned. Our engineering teams are actively working to restore full functionality. We understand the importance of timely updates and are prioritizing a resolution to ensure all systems are operating normally as quickly as possible.
Disruption with some GitHub services for Safari Users
6 updates
On April 15, 2025 from 12:45 UTC to 13:56 UTC, access to GitHub.com was restricted for logged-out users using WebKit-based browsers, such as Safari and various mobile browsers. During the impact window, roughly 6.6M requests were unsuccessful. This issue was caused by a configuration change that was intended to improve our handling of large traffic spikes but was improperly targeted at too large a set of requests. To prevent future incidents like this, we are improving how we operationalize these types of changes, adding additional tools for validating what will be impacted by such changes, and reducing the likelihood of manual mistakes through automated detection and handling of such spikes (an illustrative dry-run sketch follows this incident's updates).
Safari users are now able to access GitHub.com. The fix has been rolled out to all environments.
Most unauthenticated Safari users should now be able to access github.com. We are ensuring the fix is deployed to all environments. Next update in 30 minutes.
We have identified the cause of the restriction for Safari users and are deploying a fix. Next update in 15 minutes.
We are currently investigating this issue.
Some unauthenticated Safari users are seeing the message "Access to this site has been restricted." We are currently investigating this behavior.
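The remediation items above mention building tools to validate what a traffic-handling change will affect before it ships. GitHub has not published its implementation; purely as an illustration of the idea, the sketch below runs a hypothetical proposed rule against a sample of request records and reports how many would be affected, split by browser family, so an overly broad rule would be visible before rollout.

```python
# Illustrative dry run: estimate how many sampled requests a proposed
# traffic rule would affect before enabling it. The rule below, which keys
# on logged-out requests, is hypothetical and only mirrors the shape of
# the incident described above.
from collections import Counter

def is_webkit(user_agent: str) -> bool:
    # Very rough browser-family check, sufficient for the sketch.
    ua = user_agent.lower()
    return "applewebkit" in ua and "chrome" not in ua and "chromium" not in ua

def proposed_rule(request: dict) -> bool:
    # Hypothetical rule under evaluation: restrict anonymous requests.
    return not request["authenticated"]

def dry_run(sample: list[dict]) -> Counter:
    affected = Counter()
    for req in sample:
        if proposed_rule(req):
            family = "webkit" if is_webkit(req["user_agent"]) else "other"
            affected[family] += 1
    return affected

sample = [
    {"authenticated": False, "user_agent": "Mozilla/5.0 ... AppleWebKit/605.1.15 ... Safari/605.1.15"},
    {"authenticated": True,  "user_agent": "Mozilla/5.0 ... AppleWebKit/537.36 ... Chrome/124.0"},
]
# Quantify impact before rollout, e.g. Counter({'webkit': 1}).
print(dry_run(sample))
```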
[Retroactive] Access from China temporarily blocked for users that were not logged in
1 update
Due to a configuration change with unintended impact, some logged-out users who tried to visit GitHub.com from China were temporarily unable to access the site. Users who were already logged in could continue to access the site successfully. Impact started 2025/04/12 at 20:01 UTC and was mitigated 2025/04/13 at 14:55 UTC. During this time, up to 4% of all anonymous requests originating from China were unsuccessful. The configuration changes that caused this impact have been reversed, and users should no longer see problems when trying to access GitHub.com.
Incident with Codespaces
4 updates
On April 11 from 3:05am UTC to 3:44am UTC, approximately 75% of Codespaces users faced create and start failures. These were caused by manual configuration changes to an internal dependency. We reverted the changes and immediately restored service health. We are working on safer mechanisms for testing and rolling out such configuration changes, and we expect no further disruptions.
We have reverted a problematic configuration change and are seeing recovery across starts and resumes
We have identified an issue that is causing errors when starting new and resuming existing Codespaces. We are currently working on a mitigation
We are investigating reports of degraded availability for Codespaces
Disruption with some Pull Requests stuck in processing state
6 updates
On April 9, 2025, between 11:27 UTC and 12:39 UTC, the Pull Requests service was degraded and experienced delays in processing updates. At peak, approximately 1–1.5% of users were affected by delays in synchronizing pull requests. During this period, users may have seen a "Processing updates" message in their pull requests after pushing new commits, and the new commits did not appear in the Pull Request view as expected. The Pull Request synchronization process has automatic retries, and most delays were automatically resolved. Any Pull Requests that were not resynchronized during this window were manually synchronized on Friday, April 11 at 14:23 UTC. This was due to a misconfigured GeoIP lookup file that our routine GitHub operations depend on, which caused background job processing to fail. We mitigated the incident by reverting to a known good version of the GeoIP lookup file on affected hosts. We are working to enhance our CI testing and automation by validating GeoIP metadata to reduce our time to detection and mitigation of issues like this one in the future (a sketch of this kind of validation follows this incident's updates).
Pull Requests is operating normally.
The team has identified a mitigation and is rolling it out while actively monitoring recovery
Some users are experiencing delays in pull request updates. After pushing new commits, PRs show a "Processing updates" message, and the new commits do not appear in the pull request view.
Pull Requests is experiencing degraded performance. We are continuing to investigate.
We are currently investigating this issue.
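The postmortem above attributes the failures to a misconfigured GeoIP lookup file and mentions validating GeoIP metadata going forward. Assuming the file is a MaxMind-format database (the report does not say what GitHub actually uses), a pre-deployment check along these lines, built on the maxminddb package, would reject a file that fails to open, is stale, or cannot serve a known lookup. The freshness threshold and probe address are illustrative choices, not values from the report.

```python
# Illustrative pre-deployment check for a GeoIP database file, assuming the
# MaxMind .mmdb format (the incident report does not name the actual format).
# Fails fast if the file cannot be opened, is older than expected, or cannot
# resolve a well-known address.
import sys
import time
import maxminddb

MAX_AGE_DAYS = 45          # hypothetical freshness threshold
PROBE_IP = "8.8.8.8"       # well-known address used as a smoke test

def validate_geoip(path: str) -> None:
    reader = maxminddb.open_database(path)   # raises if the file is corrupt
    meta = reader.metadata()
    age_days = (time.time() - meta.build_epoch) / 86400
    if age_days > MAX_AGE_DAYS:
        raise ValueError(f"{path} is {age_days:.0f} days old (type: {meta.database_type})")
    if reader.get(PROBE_IP) is None:
        raise ValueError(f"{path} returned no record for probe IP {PROBE_IP}")
    reader.close()

if __name__ == "__main__":
    validate_geoip(sys.argv[1] if len(sys.argv) > 1 else "GeoLite2-Country.mmdb")
```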
Incident with Pull Requests
3 updates
On April 9, 2025, between 7:01 UTC and 9:31 UTC, the Pull Requests service was degraded and failed to update refs for repositories with higher traffic. This was due to a repository migration creating a larger than usual number of enqueued jobs. This resulted in an increase in job failures and delays for non-migration jobs. We declared an incident once we confirmed that this issue was not isolated to the migrating repository and that other repositories were also failing to process ref updates. We mitigated the incident by shifting the migration jobs to a different job queue. To avoid problems like this in the future, we are revisiting our repository migration process and are working to isolate potentially problematic migration workloads from non-migration workloads (an illustrative queue-isolation sketch follows this incident's updates).
We saw a period of delays affecting Pull Request experiences. The impact has ended, but we are investigating to prevent a repeat.
We are investigating reports of degraded performance for Pull Requests
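The mitigation above was to move migration-sourced jobs onto their own queue so they could not delay routine ref updates. The report does not describe GitHub's job system, so the sketch below only illustrates the general pattern with hypothetical in-memory queues and an enqueue helper: work is routed by its source to either a bulk, low-priority queue or the latency-sensitive default.

```python
# Illustrative queue-isolation pattern: route bulk migration work to its own
# queue so it cannot delay latency-sensitive jobs. Queue names and the
# enqueue() helper are hypothetical; GitHub's actual job system is not
# described in the incident report.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    payload: dict
    queue: str

def enqueue(job: Job, queues: dict[str, list[Job]]) -> None:
    # Stand-in for a real job backend: append to the named in-memory queue.
    queues.setdefault(job.queue, []).append(job)

def queue_for(source: str) -> str:
    # Bulk, retry-heavy migration traffic goes to a low-priority queue.
    return "migrations_low_priority" if source == "migration" else "ref_updates"

queues: dict[str, list[Job]] = {}
enqueue(Job("sync_refs", {"repo": 1}, queue_for("migration")), queues)
enqueue(Job("sync_refs", {"repo": 2}, queue_for("push")), queues)
print({name: len(jobs) for name, jobs in queues.items()})
# -> {'migrations_low_priority': 1, 'ref_updates': 1}
```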
4 updates
On 2025-04-08, between 00:42 and 18:05 UTC, as we rolled out an updated version of our GPT-4o model, we observed that vision capabilities for GPT-4o in Copilot Chat in GitHub were intermittently unavailable. During this period, customers may have been unable to upload image attachments to Copilot Chat in GitHub. In response, we paused the rollout at 18:05 UTC. Recovery began immediately, and telemetry indicates that the issue was fully resolved by 18:21 UTC. Following this incident, we have identified areas of improvement in our model rollout process, including enhanced monitoring and expanded automated and manual testing of our end-to-end capabilities.
The issue has been resolved now, and we're actively monitoring the service for any further issues.
Image attachments are not available for some models in Copilot Chat on github.com. The issue has been identified and the fix is in progress.
We are currently investigating this issue.
Disruption with some GitHub services
4 updates
Pull Requests is operating normally.
On April 7, 2025 between 2:15:37 AM UTC and 2:31:14 AM UTC, multiple GitHub services were degraded. Requests to these services returned 5xx errors at a high rate due to an internal database being exhausted by our Codespaces service. The incident resolved on its own. We have addressed the problematic queries from the Codespaces service, minimizing the risk of future recurrences.
Pull Requests is experiencing degraded performance. We are continuing to investigate.
We are currently investigating this issue.
Disruption with some GitHub services
3 updates
On 2025-04-03, between 6:13:27 PM UTC and 7:12:00 PM UTC, the docs.github.com service was degraded and returned errors. On average, the error rate was 8%, peaking at 20% of requests to the service. This was due to a misconfiguration combined with elevated request volume. We mitigated the incident by correcting the misconfiguration. We are working to reduce our time to detection and mitigation of issues like this one in the future.
We are investigating intermittent unavailability of GitHub Docs and working on applying mitigations.
We are currently investigating this issue.
Disruption with some GitHub services
3 updates
Between 2025-03-27 12:00 UTC and 2025-04-03 16:00 UTC, the GitHub Enterprise Cloud Dormant Users report was degraded and falsely indicated that dormant users were active within their business. This was due to increased load on a database from a non-performant query. We mitigated the incident by increasing the capacity of the database and installing monitors for this specific report to improve observability in the future. As a long-term solution, we are rewriting the Dormant Users report to optimize how it queries for user activity, which will result in significantly faster and more accurate report generation (an illustrative query sketch follows this incident's updates).
We are aware that the generation of the Dormant Users Report is delayed for some of our customers, and that the resulting report may be inaccurate. We are actively investigating the root cause and a possible remediation.
We are currently investigating this issue.
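The long-term fix above is a rewrite of how the report queries for user activity. The real schema is not public, so the sketch below is purely illustrative: using a hypothetical users/user_events schema in SQLite, it contrasts the slow shape (one activity probe per user) with a single set-based anti-join that produces the same dormant-user list in one pass.

```python
# Illustrative only: the schema (users, user_events) and dormancy cutoff are
# hypothetical. Shows the shape of the optimization the report describes:
# replace one activity probe per user with a single set-based pass.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, login TEXT);
    CREATE TABLE user_events (user_id INTEGER, occurred_at TEXT);
    INSERT INTO users VALUES (1, 'octocat'), (2, 'hubber');
    INSERT INTO user_events VALUES (1, '2025-04-01T09:00:00Z');
""")

CUTOFF = "2025-01-03T00:00:00Z"   # hypothetical 90-day dormancy cutoff

# Slow shape: one query per user (N round trips, repeated scans).
dormant_slow = [
    login for (uid, login) in conn.execute("SELECT id, login FROM users")
    if conn.execute(
        "SELECT 1 FROM user_events WHERE user_id = ? AND occurred_at >= ? LIMIT 1",
        (uid, CUTOFF),
    ).fetchone() is None
]

# Faster shape: a single set-based anti-join over the same data.
dormant_fast = [row[0] for row in conn.execute("""
    SELECT u.login
    FROM users u
    LEFT JOIN user_events e
           ON e.user_id = u.id AND e.occurred_at >= ?
    WHERE e.user_id IS NULL
""", (CUTOFF,))]

assert dormant_slow == dormant_fast == ['hubber']
```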
Disruption with some GitHub services
3 updates
On April 1st, 2025, between 08:17 UTC and 09:29 UTC, the data store powering the Audit Log service experienced elevated errors, resulting in an approximately 45-minute delay of Audit Log Events. Our systems maintained data continuity and we experienced no data loss. The delay only affected the Audit Log API and the Audit Log user interface; any configured Audit Log Streaming endpoints received all relevant Audit Log Events. The data store team deployed mitigating actions, which resulted in a full recovery of the data store’s availability.
The Audit Log is experiencing an increase of failed queries due to availability issues with the associated data store. Audit Log data is experiencing a delay in availability. We have identified the issue and we are deploying mitigating measures.
We are currently investigating this issue.