[Design Proposal] Event API v2#1949
Codecov Report

```
@@           Coverage Diff           @@
##           master    #1949   +/-   ##
=======================================
  Coverage   52.16%   52.16%
=======================================
  Files         180      180
  Lines        7963     7963
=======================================
  Hits         4154     4154
  Misses       3420     3420
  Partials      389      389
=======================================
```

Continue to review the full report at Codecov.
The main drawback here is that for builds run in parallel, the statuses will not truly reflect reality until all builds are completed, but I don't believe users will see much of an effect from this.
I agree.
- Most of the time, the artifactsToBuild will just have one item.
- And even if there are more items, builds will usually succeed, which means the users are mainly interested in "when is the whole build cycle complete".
Umm, I kind of disagree.
For microservices, the number of artifactsToBuild is usually > 1.
Can we instead use Go channels?
Build and Deploy would then consume a channel to write events or bRes to.
The runner would emit events for build artifacts as they are written to the channel.
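To make the channel idea concrete, here is a minimal sketch of streaming build events over a Go channel as each parallel build finishes, rather than batching them until the whole cycle completes. The `BuildEvent` type, `buildAll` function, and artifact names are illustrative assumptions, not skaffold's actual API.

```go
package main

import (
	"fmt"
	"sync"
)

// BuildEvent is a hypothetical event type; skaffold's real event
// structs differ.
type BuildEvent struct {
	Artifact string
	Err      error
}

// buildAll runs one goroutine per artifact and streams a BuildEvent
// onto the channel as soon as each build finishes, then closes the
// channel once every build is done.
func buildAll(artifacts []string, events chan<- BuildEvent) {
	var wg sync.WaitGroup
	for _, a := range artifacts {
		wg.Add(1)
		go func(artifact string) {
			defer wg.Done()
			// placeholder for the real build step
			events <- BuildEvent{Artifact: artifact}
		}(a)
	}
	wg.Wait()
	close(events)
}

func main() {
	events := make(chan BuildEvent)
	go buildAll([]string{"frontend", "backend"}, events)
	// The consumer sees each result as it arrives; order is
	// nondeterministic for parallel builds.
	for ev := range events {
		if ev.Err != nil {
			fmt.Println("build failed:", ev.Artifact)
		} else {
			fmt.Println("build complete:", ev.Artifact)
		}
	}
}
```

The consumer loop terminates cleanly because `buildAll` closes the channel after all goroutines finish, so no separate "done" signal is needed.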
@tejal29 I may be mistaken, but usually the build is triggered by the watcher, right? This means that the reason was some user change, and users usually make small, confined changes in one source code file. Therefore I think that the queue should, most of the time, have just one dirty artifact (the one that the edited file belongs to).
This of course does not apply to the first run where all artifacts are built.
@corneliusweig good point. For the dev loop this would not have much effect.
Builds can be triggered by
- skaffold build
- skaffold run
- skaffold dev
In the first two cases, parallel builds happen and events will be triggered at the end of all builds.
For dev:
Ideally, one file should be part of one artifact; however, if a file belongs to a common library, then all the artifacts may need to be rebuilt.
Consider a bazel example.
```yaml
apiVersion: skaffold/v1beta8
kind: Config
build:
  artifacts:
  - image: gcr.io/k8s-skaffold/skaffold-bazel-frontend
    bazel:
      target: //:skaffold_example-frontend.tar
  - image: gcr.io/k8s-skaffold/skaffold-bazel-backend
    bazel:
      target: //:skaffold_example-backend.tar
```
The targets skaffold_example-frontend and skaffold_example-backend depend on the target //common-lib.
In this case, if a user changes a file in common-lib, both of the images should trigger a build.
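A minimal sketch of that fan-out: marking every artifact dirty whose dependency set contains the changed path. The `Artifact` type and its explicit `Deps` list are assumptions for illustration; skaffold's real dependency tracking is more involved.

```go
package main

import "fmt"

// Artifact models an image plus the source paths it depends on.
// Names and fields are illustrative, not skaffold's real types.
type Artifact struct {
	Image string
	Deps  []string
}

// dirtyArtifacts returns every artifact whose dependency set contains
// the changed path, so a change in a shared library marks all of its
// consumers for rebuild.
func dirtyArtifacts(artifacts []Artifact, changed string) []Artifact {
	var dirty []Artifact
	for _, a := range artifacts {
		for _, d := range a.Deps {
			if d == changed {
				dirty = append(dirty, a)
				break
			}
		}
	}
	return dirty
}

func main() {
	artifacts := []Artifact{
		{Image: "gcr.io/k8s-skaffold/skaffold-bazel-frontend", Deps: []string{"frontend/", "common-lib/"}},
		{Image: "gcr.io/k8s-skaffold/skaffold-bazel-backend", Deps: []string{"backend/", "common-lib/"}},
	}
	// A change in the shared library dirties both artifacts:
	for _, a := range dirtyArtifacts(artifacts, "common-lib/") {
		fmt.Println("rebuild:", a.Image)
	}
	// prints:
	// rebuild: gcr.io/k8s-skaffold/skaffold-bazel-frontend
	// rebuild: gcr.io/k8s-skaffold/skaffold-bazel-backend
}
```

A change confined to `frontend/` would dirty only the frontend image, matching the single-dirty-artifact assumption for the dev loop.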
Yeah, this is the key contentious part in this redesign.
Go channels would definitely be a correct solution for streaming build events, but it feels like a very large-scale change. That could be a follow-up, or maybe we can punt on it entirely until we see evidence that we need it, and go with @corneliusweig's assumption, which I think can be true as well: except for the first build, most actual builds are going to be single-artifact builds.
Also, as I mentioned above, I agree that we should consult with the IDE teams on "batch" vs "streamed" build events.
> Go channels would definitely be a correct solution for streaming build events, but it feels like a very large-scale change.

I don't agree that "a very large-scale change" should be a reason to punt or shelve functionality for later.
We don't have evidence for our current assumption either, which is "most of the time it's a single-artifact build".
Maybe our assumption is wrong.
That being said, we need to arrive at a reasonable decision, especially in this scenario where we have no data on how users are using skaffold or how long their builds are; this is where stakeholders come into the picture.
We do need to make sure that consumers of the Event API know that these events are now going to be batched.
For the record - I did change my mind on this and pushed for keeping the "realtime" behavior in our conversations.
```go
if res.Error != nil {
	event.BuildFailed(res.Target, res.Error)
} else {
	event.BuildComplete(res.Target)
}
```
Who is consuming these events?
Is it the IDE team? If that is the case, we should get them involved as a stakeholder.
In general I agree, and they are actively testing the functionality.
This redesign targets the internal details and should not impact the functionality of the Event API. I'm not sure it would be productive to involve them at this level; they are interfacing at the HTTP/gRPC level.
Well, in this redesign we are changing the functionality a little bit, since we are changing when the events get triggered.
Actually, I take that back: the change from "streamed" to "batched" events for multi-artifact builds is very much worth bringing up to the IDE teams.
* Author: Nick Kubala
* Design Shepherd: Any volunteers? :)
* Date: 04/11/19
* Status: [Reviewed/Cancelled/Under implementation/Complete]
```suggestion
* Status: [Cancelled]
```
balopat
left a comment
Let's merge this in for historical decision tracking
Co-Authored-By: Balint Pato <balopat@users.noreply.github.com>
Slightly reworked version of #1854