
Simplify port choosing logic #1747

Merged
nkubala merged 2 commits into GoogleContainerTools:master from nkubala:port_forward_refactor
Mar 7, 2019

Conversation

@nkubala
Contributor

@nkubala nkubala commented Mar 6, 2019

This simplifies the logic for choosing a port by removing the map that kept track of the ports we've already forwarded. It also moves two port-related helper functions into the util package, for later reuse.

This will first check the 10 subsequent ports after the provided port for availability, so we can stay close to the provided port. If none of those are free, we fall back to checking 4503-4533, and as a last resort we choose a random port.

Note: creating a listener on a port is a lightweight operation, so the overhead of checking whether a listener can be created, compared to the overhead of checking whether an entry is in a map, is marginal. The performance difference should be unnoticeable.
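The three-tier strategy described above can be sketched roughly as follows. This is a simplified illustration, not the actual skaffold code: `isPortFree` and `getAvailablePort` are hypothetical stand-ins for the helpers moved into `pkg/skaffold/util/port.go`.

```go
package main

import (
	"fmt"
	"net"
)

// isPortFree reports whether a TCP listener can be bound on the port.
// (hypothetical helper, standing in for the one in pkg/skaffold/util/port.go)
func isPortFree(port int) bool {
	l, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
	if err != nil {
		return false
	}
	l.Close()
	return true
}

// getAvailablePort sketches the strategy from the PR description:
// try the requested port and the 10 following it, then the fallback
// range 4503-4533, then let the OS pick a random free port.
func getAvailablePort(port int) int {
	for p := port; p <= port+10; p++ {
		if isPortFree(p) {
			return p
		}
	}
	for p := 4503; p <= 4533; p++ {
		if isPortFree(p) {
			return p
		}
	}
	// last resort: port 0 asks the OS for any free port
	l, err := net.Listen("tcp", ":0")
	if err != nil {
		return -1
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port
}

func main() {
	fmt.Println(getAvailablePort(8080) > 0)
}
```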

@codecov-io

codecov-io commented Mar 7, 2019

Codecov Report

Merging #1747 into master will decrease coverage by 0.07%.
The diff coverage is 32.25%.


@@            Coverage Diff             @@
##           master    #1747      +/-   ##
==========================================
- Coverage   47.35%   47.27%   -0.08%     
==========================================
  Files         126      127       +1     
  Lines        6179     6170       -9     
==========================================
- Hits         2926     2917       -9     
- Misses       2955     2957       +2     
+ Partials      298      296       -2
Impacted Files                              Coverage Δ
pkg/skaffold/util/port.go                   0% <0%> (ø)
pkg/skaffold/kubernetes/port_forward.go     41.34% <100%> (+2.54%) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 75b6834...1298a20. Read the comment docs.

Contributor

@balopat balopat left a comment


LGTM.

Notes:

  • I'm not completely convinced that "creating a listener on a port" is a lightweight operation as it creates a server that listens on a network socket...but given that we are talking about O(10) ports max this shouldn't cause an issue.
  • I'm thinking that we could keep that listener and use it as a lock until we actually forward the port using kubectl - WDYT?

@nkubala
Contributor Author

nkubala commented Mar 7, 2019

@balopat yeah I should have been a little more specific, "lightweight" is a pretty subjective term. my reasoning was similar to yours: we're very likely not checking more than 30 ports total, and a quick benchmark gives me around 14µs per call to net.Listen() (plus listener.Close()), so this is negligible to the user.
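A quick measurement along the lines of the benchmark mentioned above can be reproduced like this. This is an illustrative sketch, not the benchmark nkubala actually ran; exact timings will vary by machine and OS.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// avgListenClose times n rounds of net.Listen followed by Close,
// the availability check under discussion. Port 0 lets the OS pick
// a free port each iteration so the bind never fails spuriously.
func avgListenClose(n int) time.Duration {
	start := time.Now()
	for i := 0; i < n; i++ {
		l, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			panic(err)
		}
		l.Close()
	}
	return time.Since(start) / time.Duration(n)
}

func main() {
	fmt.Println("avg per Listen+Close:", avgListenClose(500))
}
```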

I'm thinking that we could keep that listener and use it as a lock until we actually forward the port using kubectl

hmm, so you mean as a secondary check, since just because a port is free on the host doesn't mean it's going to be free on the cluster side? interesting idea but I'm not sure how much we gain from it. I'll leave it out of this PR, but feel free to open another one and send it my way :)

@nkubala nkubala merged commit d8c5de6 into GoogleContainerTools:master Mar 7, 2019
@nkubala nkubala deleted the port_forward_refactor branch March 7, 2019 19:58
@balopat
Contributor

balopat commented Mar 7, 2019

hmm, so you mean as a secondary check, since just because a port is free on the host doesn't mean it's going to be free on the cluster side? interesting idea but I'm not sure how much we gain from it. I'll leave it out of this PR, but feel free to open another one and send it my way :)

No, I was rather thinking along the lines that these actions are not atomic: check for a free port, then allocate it with kubectl port-forward. There is a chance that some other process would "sneak in" in between, making the first check stale and the kubectl port-forward fail. If we'd keep the socket around just until we are about to run kubectl port-forward, we could minimize this gap. It still isn't perfect, but it could limit the probability of some annoying, obscure race conditions.

@briandealwis
Member

I see this race condition with examples/jib-multimodule. With HEAD:

Deploy complete in 368.621615ms
Watching for changes every 1s...
Port Forwarding web1-6ff665dcc9-j4n6j/web1 8080 -> 8080
Port Forwarding web2-5b59d9db-4c74z/web2 8080 -> 8080

On v0.24.0:

Watching for changes every 1s...
Port Forwarding web1-6ff665dcc9-fhlwl/web1 8080 -> 8080
Forwarding container web2 to local port 4503.
Port Forwarding web2-5b59d9db-vwnpj/web2 8080 -> 4503
