this post was submitted on 20 Apr 2024

Git

[–] [email protected] 2 points 7 months ago (1 children)

> I do wonder if jumping back by twice as much is optimal.

It is. This is called exponential search, which maintains the O(log(n)) time complexity of binary search.

(Where n is the position of the commit you are searching for.)
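The idea can be shown with a minimal Python sketch. Here `is_good(k)` is a hypothetical predicate standing in for testing the commit `k` steps behind HEAD (in real use, building and testing it, as `git bisect` would); HEAD itself is assumed bad, and some older good commit is assumed to exist:

```python
def find_boundary(is_good):
    """Exponential ("galloping") search for the most recent good commit.

    is_good(k) reports whether the commit k steps behind HEAD is good;
    HEAD (k == 0) is assumed bad. Returns the smallest k with
    is_good(k) == True. Uses O(log k) probes, where k is the distance
    from HEAD to the boundary -- no upper bound on history is needed.
    """
    # Gallop backwards: 1, 2, 4, 8, ... until we land on a good commit.
    step = 1
    while not is_good(step):
        step *= 2
    # The boundary lies in (step // 2, step]; binary-search that bracket.
    lo, hi = step // 2, step  # lo is known bad (or HEAD), hi is known good
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_good(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

The galloping phase takes about log2(k) probes to bracket the boundary, and the binary-search phase another log2(k) to pin it down, which is why the overall complexity stays logarithmic in the boundary's distance from HEAD rather than in the total history length.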

[–] [email protected] 1 points 7 months ago* (last edited 7 months ago) (1 children)

You seem to be talking about binary search, but this is a search with an unbounded end.

I think the actual optimal thing would be just to take the first commit and bisect from there. But in practice this has downsides:

  1. Bugs that you are tracking down are typically biased towards recent commits.
  2. Older commits may not be relevant to the bug at all (e.g., from before the feature was introduced).
  3. You may have trouble building older commits for various reasons.

So I think "optimal" is sort of fuzzy in this context, but I'm sure you could model these variables and derive something that is optimal for a given model. I haven't gotten around to doing that yet, though.

[–] [email protected] 3 points 7 months ago* (last edited 7 months ago)

No, I'm talking about exponential search, which is like binary search but with an unbounded end.

OP has implemented an exponential search.