forked from git/git
http: add support for HTTP 429 rate limit retries #2008
Open
vaidas-shopify wants to merge 2 commits into gitgitgadget:master from vaidas-shopify:retry-after
+750 −57
@@ -315,6 +315,30 @@ http.keepAliveCount::
	unset, curl's default value is used. Can be overridden by the
Taylor Blau wrote on the Git mailing list: On Thu, Dec 18, 2025 at 02:44:47PM +0000, Vaidas Pilkauskas via GitGitGadget wrote:
> From: Vaidas Pilkauskas <vaidas.pilkauskas@shopify.com>
>
> Add retry logic for HTTP 429 (Too Many Requests) responses to handle
> server-side rate limiting gracefully. When Git's HTTP client receives
> a 429 response, it can now automatically retry the request after an
> appropriate delay, respecting the server's rate limits.
>
> The implementation supports the RFC-compliant Retry-After header in
> both delay-seconds (integer) and HTTP-date (RFC 2822) formats. If a
> past date is provided, Git retries immediately without waiting.
>
> Retry behavior is controlled by three new configuration options
> (http.maxRetries, http.retryAfter, and http.maxRetryTime) which are
> documented in git-config(1).
>
> The retry logic implements a fail-fast approach: if any delay
> (whether from server header or configuration) exceeds maxRetryTime,
> Git fails immediately with a clear error message rather than capping
> the delay. This provides better visibility into rate limiting issues.
>
> The implementation includes extensive test coverage for basic retry
> behavior, Retry-After header formats (integer and HTTP-date),
> configuration combinations, maxRetryTime limits, invalid header
> handling, environment variable overrides, and edge cases.
> +http.retryAfter::
> + Default wait time in seconds before retrying when a server returns
> + HTTP 429 (Too Many Requests) without a Retry-After header. If set
> + to -1 (the default), Git will fail immediately when encountering
While reviewing, I originally wrote:
Setting the default as "-1" makes sense to me. The current behavior is
to give up when we receive a HTTP 429 response with or without a
Retry-After header, so retaining that behavior makes sense and seems
like a sensible path.
But I'm not sure that I am sold on that line of thinking. This is
controlling how long we'll wait after a 429 response before retrying,
not how many times we'll retry (which is `http.maxRetries` below).
Should the default here be zero? We would "retry" immediately, but that
retry would fail since the maximum retries is set to "zero" by default.
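For context, a repository that opts in to retries under these proposed options might carry something like the following in its config; the values here are arbitrary examples, not recommendations:

```
[http]
	maxRetries = 3
	retryAfter = 10
	maxRetryTime = 300
```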
> diff --git a/http-push.c b/http-push.c
> index 60a9b75620..ddb9948352 100644
> --- a/http-push.c
> +++ b/http-push.c
> @@ -716,6 +716,10 @@ static int fetch_indices(void)
> case HTTP_MISSING_TARGET:
> ret = 0;
> break;
> + case HTTP_RATE_LIMITED:
> + error(_("rate limited by '%s', please try again later"), repo->url);
> + ret = -1;
> + break;
> default:
> ret = -1;
> }
> @@ -1548,6 +1552,10 @@ static int remote_exists(const char *path)
> case HTTP_MISSING_TARGET:
> ret = 0;
> break;
> + case HTTP_RATE_LIMITED:
> + error(_("rate limited by '%s', please try again later"), url);
> + ret = -1;
> + break;
I wonder if there is an opportunity to DRY this up a bit? I think the
case in fetch_indices() is very similar to remote_exists(), and ditto
for fetch_indices() in the http-walker.c code.
The only exception I could see is http-walker.c's fetch_indices() needs
to also set repo->got_indices, but I think that could be done as a
separate pass.
If you end up going in that direction, I would suggest pulling out a
function as a preparatory commit before introducing the changes in this
patch so that when you are ready to add the "rate limited by '%s'"
error(), you only have to do so once.
> -static size_t fwrite_wwwauth(char *ptr, size_t eltsize, size_t nmemb, void *p UNUSED)
> +static size_t fwrite_headers(char *ptr, size_t eltsize, size_t nmemb, void *p)
Thanks for making this change. I think that handling both
www-authenticate and retry-after headers in the same function makes a
lot of sense, and the new name reflects that appropriately.
> {
> size_t size = eltsize * nmemb;
> struct strvec *values = &http_auth.wwwauth_headers;
> struct strbuf buf = STRBUF_INIT;
> const char *val;
> size_t val_len;
> + struct active_request_slot *slot = (struct active_request_slot *)p;
>
> /*
> * Header lines may not come NULL-terminated from libcurl so we must
> @@ -257,6 +264,47 @@ static size_t fwrite_wwwauth(char *ptr, size_t eltsize, size_t nmemb, void *p UN
> goto exit;
> }
>
> + /* Parse Retry-After header for rate limiting */
> + if (skip_iprefix_mem(ptr, size, "retry-after:", &val, &val_len)) {
> + strbuf_add(&buf, val, val_len);
> + strbuf_trim(&buf);
> +
> + if (slot && slot->results) {
> + /* Parse the retry-after value (delay-seconds or HTTP-date) */
> + char *endptr;
> + long retry_after;
> +
> + errno = 0;
> + retry_after = strtol(buf.buf, &endptr, 10);
> +
> + /* Check if it's a valid integer (delay-seconds format) */
> + if (endptr != buf.buf && *endptr == '\0' &&
> + errno != ERANGE && retry_after > 0) {
Should we handle "Retry-After: 0" here? I think that this means "retry
immediately", so I imagine that we should change this to read "&&
retry_after >= 0" instead.
> + slot->results->retry_after = retry_after;
> + } else {
> + /* Try parsing as HTTP-date format */
> + timestamp_t timestamp;
> + int offset;
> + if (!parse_date_basic(buf.buf, &timestamp, &offset)) {
> + /* Successfully parsed as date, calculate delay from now */
> + timestamp_t now = time(NULL);
> + if (timestamp > now) {
> + slot->results->retry_after = (long)(timestamp - now);
> + } else {
> + /* Past date means retry immediately */
> + slot->results->retry_after = 0;
> + }
> + } else {
> + /* Failed to parse as either delay-seconds or HTTP-date */
> + warning(_("unable to parse Retry-After header value: '%s'"), buf.buf);
> + }
> + }
> + }
> +
> + http_auth.header_is_last_match = 1;
Could you help me understand why we're setting header_is_last_match
here? I think since we immediately "goto exit" this line isn't strictly
necessary.
As a separate but related note, I don't know if this function properly
handles header continuations for Retry-After headers, but in practice I
suspect it doesn't matter, as servers should not be continuing
Retry-After headers across multiple lines.
> @@ -1660,44 +1729,98 @@ void run_active_slot(struct active_request_slot *slot)
> fd_set excfds;
> int max_fd;
> struct timeval select_timeout;
> + long curl_timeout;
> + struct timeval start_time = {0}, current_time, elapsed_time = {0};
> + long remaining_seconds;
> int finished = 0;
> + int slot_not_started = (slot->finished == NULL);
> + int waiting_for_delay = (slot->retry_delay_seconds > 0);
> +
> + if (waiting_for_delay) {
> + warning(_("rate limited, waiting %ld seconds before retry"), slot->retry_delay_seconds);
> + start_time = slot->retry_delay_start;
> + }
>
> slot->finished = &finished;
> - while (!finished) {
> + while (waiting_for_delay || !finished) {
> + if (waiting_for_delay) {
> + gettimeofday(&current_time, NULL);
> + elapsed_time.tv_sec = current_time.tv_sec - start_time.tv_sec;
> + elapsed_time.tv_usec = current_time.tv_usec - start_time.tv_usec;
> + if (elapsed_time.tv_usec < 0) {
> + elapsed_time.tv_sec--;
> + elapsed_time.tv_usec += 1000000;
> + }
> +
> + if (elapsed_time.tv_sec >= slot->retry_delay_seconds) {
> + slot->retry_delay_seconds = -1;
> + waiting_for_delay = 0;
> +
> + if (slot_not_started)
> + return;
I wonder if run_active_slot() is the right place for these changes or if
it should be handled separately. I think it may be somewhat surprising
for run_active_slot() to return without actually running the slot, even
if the slot is marked as "active" but just waiting for a delay.
OTOH, like I mentioned earlier, I am far from an expert in this part of
the code, so perhaps this is totally OK. shortlog says that Peff (CC'd)
is among the most active contributors to this file in the past year, so
I'll be curious what he thinks as well.
> @@ -1871,6 +1994,8 @@ static int handle_curl_result(struct slot_results *results)
> }
> return HTTP_REAUTH;
> }
> + } else if (results->http_code == 429) {
> + return HTTP_RATE_LIMITED;
> } else {
> if (results->http_connectcode == 407)
> credential_reject(the_repository, &proxy_auth);
> @@ -1886,6 +2011,14 @@ int run_one_slot(struct active_request_slot *slot,
> struct slot_results *results)
> {
> slot->results = results;
> + /* Initialize retry_after to -1 (not set) */
> + results->retry_after = -1;
> +
> + /* If there's a retry delay, wait for it before starting the slot */
> + if (slot->retry_delay_seconds > 0) {
> + run_active_slot(slot);
> + }
This is a nitpick, but the curly braces here are unnecessary for a
single-line if statement. Documentation/CodingGuidelines has more
details here.
> +
> if (!start_active_slot(slot)) {
> xsnprintf(curl_errorstr, sizeof(curl_errorstr),
> "failed to start HTTP request");
> @@ -2117,9 +2250,13 @@ static void http_opt_request_remainder(CURL *curl, off_t pos)
> #define HTTP_REQUEST_STRBUF 0
> #define HTTP_REQUEST_FILE 1
>
> +static void sleep_for_retry(struct active_request_slot *slot, long retry_after);
> +
> static int http_request(const char *url,
> void *result, int target,
> - const struct http_get_options *options)
> + const struct http_get_options *options,
> + long *retry_after_out,
> + long retry_delay)
> {
> struct active_request_slot *slot;
> struct slot_results results;
> @@ -2129,6 +2266,10 @@ static int http_request(const char *url,
> int ret;
>
> slot = get_active_slot();
> + /* Mark slot for delay if retry delay is provided */
> + if (retry_delay > 0) {
> + sleep_for_retry(slot, retry_delay);
> + }
Same note here as above.
> +/*
> + * Handle rate limiting retry logic for HTTP 429 responses.
> + * Uses slot-specific retry_after value to support concurrent slots.
> + * Returns a negative value if retries are exhausted or configuration is invalid,
> + * otherwise returns the delay value (>= 0) to indicate the retry should proceed.
> + */
> +static long handle_rate_limit_retry(int *rate_limit_retries, long slot_retry_after)
> +{
> + int retry_attempt = http_max_retries - *rate_limit_retries + 1;
> + if (*rate_limit_retries <= 0) {
> + /* Retries are disabled or exhausted */
> + if (http_max_retries > 0) {
> + error(_("too many rate limit retries, giving up"));
> + }
Here as well.
> + return -1;
> + }
> +
> + /* Decrement retries counter */
> + (*rate_limit_retries)--;
> +
> + /* Use the slot-specific retry_after value or configured default */
> + if (slot_retry_after >= 0) {
> + /* Check if retry delay exceeds maximum allowed */
> + if (slot_retry_after > http_max_retry_time) {
> + error(_("rate limited (HTTP 429) requested %ld second delay, "
> + "exceeds http.maxRetryTime of %ld seconds"),
> + slot_retry_after, http_max_retry_time);
> + return -1;
> + }
> + return slot_retry_after;
> + } else {
> + /* No Retry-After header provided */
> + if (http_retry_after < 0) {
> + /* Not configured - exit with error */
> + error(_("rate limited (HTTP 429) and no Retry-After header provided. "
> + "Configure http.retryAfter or set GIT_HTTP_RETRY_AFTER."));
> + return -1;
> + }
> + /* Check if configured default exceeds maximum allowed */
> + if (http_retry_after > http_max_retry_time) {
> + error(_("configured http.retryAfter (%ld seconds) exceeds "
> + "http.maxRetryTime (%ld seconds)"),
> + http_retry_after, http_max_retry_time);
> + return -1;
> + }
As a general note on these error()s, I wonder if it would be worth
shortening them up a bit. For example, the first one reads:
"rate limited (HTTP 429) requested %ld second delay, exceeds http.maxRetryTime of %ld seconds"
Perhaps we could shorten this to something like:
"response requested a delay greater than http.maxRetryTime (%ld > %ld seconds)"
I feel like we could get it even shorter, but I think that this is a
good starting point.
As an additional note, I think we generally try and avoid putting
instructions like "Configure http.retryAfter or [...]" in error()
messages. Those would be good advise() messages, enabling the user to
turn them off if they are not relevant to their situation, whereas
error() messages are fixed.
> +static int http_request_recoverable(const char *url,
> void *result, int target,
> struct http_get_options *options)
> {
> int i = 3;
> int ret;
> + int rate_limit_retries = http_max_retries;
> + long slot_retry_after = -1; /* Per-slot retry_after value */
>
> if (always_auth_proactively())
> credential_fill(the_repository, &http_auth, 1);
>
> - ret = http_request(url, result, target, options);
> + ret = http_request(url, result, target, options, &slot_retry_after, -1);
>
> - if (ret != HTTP_OK && ret != HTTP_REAUTH)
> + if (ret != HTTP_OK && ret != HTTP_REAUTH && ret != HTTP_RATE_LIMITED)
> return ret;
>
> + /* If retries are disabled and we got a 429, fail immediately */
> + if (ret == HTTP_RATE_LIMITED && http_max_retries == 0)
Another minor CodingGuidelines nit, but we generally do not write "x ==
0", and instead prefer "!x".
> + return HTTP_ERROR;
> +
> if (options && options->effective_url && options->base_url) {
> if (update_url_from_redirect(options->base_url,
> url, options->effective_url)) {
> @@ -2276,7 +2491,8 @@ static int http_request_reauth(const char *url,
> }
> }
>
> - while (ret == HTTP_REAUTH && --i) {
> + while ((ret == HTTP_REAUTH || ret == HTTP_RATE_LIMITED) && --i) {
I had to re-read this line, since I wasn't sure that decrementing i was
the right thing to do for both reauth and rate limited responses. But it
is, since we pass a pointer to rate_limit_retries down to
handle_rate_limit_retry() which will decrement it and eventually cause
it to return -1 when retries are exhausted, causing this loop to exit.
> static int get_protocol_http_header(enum protocol_version version,
> @@ -518,21 +529,25 @@ static struct discovery *discover_refs(const char *service, int for_push)
> case HTTP_OK:
> break;
> case HTTP_MISSING_TARGET:
> - show_http_message(&type, &charset, &buffer);
> - die(_("repository '%s' not found"),
> - transport_anonymize_url(url.buf));
> + show_http_message_fatal(&type, &charset, &buffer,
> + _("repository '%s' not found"),
> + transport_anonymize_url(url.buf));
Thanks for taking my suggestion here as well. I think that the end
result reads much cleaner, though I do think that introducing the new
show_http_message_fatal() function and rewriting the existing code
should happen in a preparatory commit before this one to more clearly
separate the changes.
> diff --git a/strbuf.c b/strbuf.c
> index 6c3851a7f8..1d3860869e 100644
> --- a/strbuf.c
> +++ b/strbuf.c
> @@ -168,7 +168,7 @@ int strbuf_reencode(struct strbuf *sb, const char *from, const char *to)
> if (!out)
> return -1;
>
> - strbuf_attach(sb, out, len, len);
> + strbuf_attach(sb, out, len, len + 1);
Not sure that I'm following this change.
> diff --git a/t/lib-httpd.sh b/t/lib-httpd.sh
> index 5091db949b..8a43261ffc 100644
> --- a/t/lib-httpd.sh
> +++ b/t/lib-httpd.sh
I may solicit Peff's input here on the remainder of the test changes,
since he is much more familiar with the lib-httpd parts of the suite
than I am.
Thanks,
Taylor

Jeff King wrote on the Git mailing list: On Tue, Feb 10, 2026 at 08:05:29PM -0500, Taylor Blau wrote:
> > diff --git a/http-push.c b/http-push.c
> > index 60a9b75620..ddb9948352 100644
> > --- a/http-push.c
> > +++ b/http-push.c
> > @@ -716,6 +716,10 @@ static int fetch_indices(void)
> > case HTTP_MISSING_TARGET:
> > ret = 0;
> > break;
> > + case HTTP_RATE_LIMITED:
> > + error(_("rate limited by '%s', please try again later"), repo->url);
> > + ret = -1;
> > + break;
> > default:
> > ret = -1;
> > }
> > @@ -1548,6 +1552,10 @@ static int remote_exists(const char *path)
> > case HTTP_MISSING_TARGET:
> > ret = 0;
> > break;
> > + case HTTP_RATE_LIMITED:
> > + error(_("rate limited by '%s', please try again later"), url);
> > + ret = -1;
> > + break;
>
> I wonder if there is an opportunity to DRY this up a bit? I think the
> > case in fetch_indices() is very similar to remote_exists(), and ditto
> for fetch_indices() in the http-walker.c code.
IMHO it is not worth trying to clean up http-push here. It's the dumb
push-over-webdav implementation that nobody uses. I'd actually be happy
to see it ripped out, but am too lazy to go through the effort of a big
deprecation period myself.
So I would actually consider not touching this code at all, and letting
it continue to behave as it did before (returning -1 and not producing
any specialized message). Though I suppose in remote_exists() we'd fail
to even print the curl error anymore, which would be a regression.
Ditto for http-walker.c's fetch_indices() function. It is used only for
dumb-http fetches (which are forbidden by most forges). And if not
touched at all, it would continue to function in the same way (not
producing any specialized message).
> As a separate but related note, I don't know if this function properly
> handles header continuations for Retry-After headers, but in practice I
> suspect it doesn't matter, as servers should not be continuing
> Retry-After headers across multiple lines.
Yeah, I noticed that, too. And all of the parsing actually makes me
nervous. Surely curl can do some of this for us?
...studies some manpages...
Ah, indeed. How about:
curl_off_t wait = 0;
curl_easy_getinfo(slot->curl, CURLINFO_RETRY_AFTER, &wait);
You can see how we already dig out similar info in finish_active_slot().
And more extended (but optional) info in http_request(). It looks like
CURLINFO_RETRY_AFTER was added in 7.66.0, so this would have to be a
conditional feature at build-time. But that seems like a reasonable
trade-off.
Side note: the obvious question is why we need fwrite_wwwauth() in the
first place. And the answer is that curl does not provide structured
access to the information from those headers. It does make me wonder
if we could be using curl_easy_header() to get rid of all of this
manual parsing and continuation code. That was introduced in 7.83.0,
which would again make it conditional. But it seems like a nicer path
forward for us. Anyway, way out of scope for this patch.
> > @@ -1660,44 +1729,98 @@ void run_active_slot(struct active_request_slot *slot)
> [...]
> > - while (!finished) {
> > + while (waiting_for_delay || !finished) {
> > + if (waiting_for_delay) {
> > + gettimeofday(&current_time, NULL);
> > + elapsed_time.tv_sec = current_time.tv_sec - start_time.tv_sec;
> > + elapsed_time.tv_usec = current_time.tv_usec - start_time.tv_usec;
> > + if (elapsed_time.tv_usec < 0) {
> > + elapsed_time.tv_sec--;
> > + elapsed_time.tv_usec += 1000000;
> > + }
> > +
> > + if (elapsed_time.tv_sec >= slot->retry_delay_seconds) {
> > + slot->retry_delay_seconds = -1;
> > + waiting_for_delay = 0;
> > +
> > + if (slot_not_started)
> > + return;
>
> I wonder if run_active_slot() is the right place for these changes or if
> it should be handled separately. I think it may be somewhat surprising
> for run_active_slot() to return without actually running the slot, even
> if the slot is marked as "active" but just waiting for a delay.
Yeah, I agree. The point of run_active_slot() is to run the slot to
completion (I think; it has been a while since I've had to dig into any
of this). So I'd either expect it to handle the retry and delay itself
internally, or to return the failed request to the caller, who will then
delay and initiate the retry.
That's all assuming we're making one request at a time (which I think is
mostly all that run_active_slot() handles). There's a much more
complicated question when we have multiple simultaneous requests, which
we'd do only with the dumb protocol (trying to fetch multiple objects at
once). In that case we need to be queuing requests. And I _think_ that
might be what this code is trying to do. But I'm not sure if it would
actually work, as we try to advance those via step_active_slots().
> OTOH, like I mentioned earlier, I am far from an expert in this part of
> the code, so perhaps this is totally OK. shortlog says that Peff (CC'd)
> is among the most active contributors to this file in the past year, so
> I'll be curious what he thinks as well.
Most of the details of this active slot stuff have long been paged out
of my memory. It's all _so_ messy because of the desire for the
dumb-http code to handle multiple requests. But for smart-http (and I
would be perfectly content for this feature to only apply there), we
could probably just focus on run_one_slot(), I'd think.
I.e., what I'd expect the simplest form of the patch to look like is
roughly:
- teach handle_curl_result() to recognize 429 and pull out the
retry-after value, returning HTTP_RETRY
- in run_one_slot(), recognize HTTP_RETRY and if appropriate, sleep
and retry
I do wonder if even that might be too low-level, though. For a real
large request, we'll be streaming data into the request, and I'm not
sure we _can_ retry. We send a probe_rpc() first in that case to try to
resolve issues like credential-filling. But there's nothing to say that
we can't get a 200 on the probe and a 429 on the real request.
Which I guess implies to me that http_request_reauth() should be where
the magic happens. And it somewhat does in this patch, but...why not do
the sleeping there, and why push it all the way down into
run_active_slot()?
I know I'm kind of talking in circles here, which is indicative of my
confusion (and the general complexity of the http code). But as the
patch stands, I'm not really convinced which cases it is trying to cover
(single requests vs multi, repeatable requests vs streaming POSTs), how
well it covers them, and that it is doing it as simply as possible (or
at least keeping the logic together).
> > @@ -518,21 +529,25 @@ static struct discovery *discover_refs(const char *service, int for_push)
> > case HTTP_OK:
> > break;
> > case HTTP_MISSING_TARGET:
> > - show_http_message(&type, &charset, &buffer);
> > - die(_("repository '%s' not found"),
> > - transport_anonymize_url(url.buf));
> > + show_http_message_fatal(&type, &charset, &buffer,
> > + _("repository '%s' not found"),
> > + transport_anonymize_url(url.buf));
>
> Thanks for taking my suggestion here as well. I think that the end
> result reads much cleaner, though I do think that introducing the new
> show_http_message_fatal() function and rewriting the existing code
> should happen in a preparatory commit before this one to more clearly
> separate the changes.
Yeah, I had the same thought.
> > diff --git a/t/lib-httpd.sh b/t/lib-httpd.sh
> > index 5091db949b..8a43261ffc 100644
> > --- a/t/lib-httpd.sh
> > +++ b/t/lib-httpd.sh
>
> I may solicit Peff's input here on the remainder of the test changes,
> since he is much more familiar with the lib-httpd parts of the suite
> than I am.
The lib-httpd parts looked about as I'd expect (and I found the use of
custom URL components to encode the retry parameters quite clever).
There were lots of uses of "date" that I suspect may give us portability
problems. "+%s" is not even in POSIX, but maybe it is universal enough.
But stuff like '-d "+2 seconds"' seems likely to be a GNU-ism.
Using "test-tool date" might get around some of that. We even understand
relative dates like "2 seconds ago", but I think only in the past. :-/
So you'd probably have to do:
now=$(test-tool date timestamp now | cut -d' ' -f3)
then=$((now + 2))
test-tool date show:rfc2822 $then
or something.
-Peff
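test-tool is only available inside git's own test suite; the epoch arithmetic it relies on above can be illustrated with plain shell (assuming `date +%s`, which is widespread but, as noted, not strictly POSIX):

```shell
# Build "two seconds from now" by arithmetic on an epoch timestamp,
# instead of relying on GNU date's "-d '+2 seconds'" extension.
now=$(date +%s)
then_ts=$((now + 2))
echo "delta=$((then_ts - now))"
```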
	`GIT_HTTP_KEEPALIVE_COUNT` environment variable.

http.retryAfter::
	Default wait time in seconds before retrying when a server returns
	HTTP 429 (Too Many Requests) without a Retry-After header. If set
	to -1 (the default), Git will fail immediately when encountering
	a 429 response without a Retry-After header. When a Retry-After
	header is present, its value takes precedence over this setting.
	Can be overridden by the `GIT_HTTP_RETRY_AFTER` environment variable.
	See also `http.maxRetries` and `http.maxRetryTime`.

http.maxRetries::
	Maximum number of times to retry after receiving HTTP 429 (Too Many
	Requests) responses. Set to 0 (the default) to disable retries.
	Can be overridden by the `GIT_HTTP_MAX_RETRIES` environment variable.
	See also `http.retryAfter` and `http.maxRetryTime`.

http.maxRetryTime::
	Maximum time in seconds to wait for a single retry attempt when
	handling HTTP 429 (Too Many Requests) responses. If the server
	requests a delay (via Retry-After header) or if `http.retryAfter`
	is configured with a value that exceeds this maximum, Git will fail
	immediately rather than waiting. Default is 300 seconds (5 minutes).
	Can be overridden by the `GIT_HTTP_MAX_RETRY_TIME` environment
	variable. See also `http.retryAfter` and `http.maxRetries`.

http.noEPSV::
	A boolean which disables using of EPSV ftp command by curl.
	This can be helpful with some "poor" ftp servers which don't