Tags: janogelin/orca

v7.16.0

feat(core): Support deserializing `Execution.initialConfig` (spinnaker#3019)

Also adds support for using `initialConfig.enabled` as a short-circuit of the
`EnabledPipelineValidator` check.

This seems reasonable, as a user can already submit pipeline JSON against
an arbitrary pipeline config id (that may or may not exist).

Providing the ability to explicitly override the enabled check seems
harmless.
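
A minimal sketch of the short-circuit idea, assuming an `initialConfig` deserialized as a map; the names below are illustrative and not the actual `EnabledPipelineValidator` or `Execution` API in orca:

```java
import java.util.Map;

// Hedged sketch: skip the stored-pipeline "enabled" check when the submitted
// execution's initialConfig explicitly marks the pipeline as enabled.
class EnabledPipelineValidatorSketch {
  boolean isExplicitlyEnabled(Map<String, Object> initialConfig) {
    // initialConfig.enabled == true acts as an explicit override from the caller
    return initialConfig != null && Boolean.TRUE.equals(initialConfig.get("enabled"));
  }

  void validate(Map<String, Object> initialConfig, String pipelineConfigId) {
    if (isExplicitlyEnabled(initialConfig)) {
      return; // short circuit: no need to look up the stored pipeline config
    }
    // ... otherwise fall through to the existing enabled-pipeline check ...
  }
}
```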

v7.15.0

fix(bake): Lookup artifact details from all upstream stages (spinnaker#3011)

Imagine the following setup:
```
Pipeline A
 -> Jenkins job (produces foo.*.deb)
 -> Run Pipeline B
    -> Jenkins job (produces bar.*.deb)
    -> Bake package foo
```

In this case, the package lookup for the bake stage will fail to look up artifact
details for `foo.deb` to pass to the bakery. The bakery is then free to pick the
latest artifact matching the name, which can be the wrong artifact.

This change allows the package lookup to traverse up parent pipeline stages, similar
to the `FindImage` tasks.
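
A hedged sketch of the traversal: check upstream stages in the current execution first, then fall back to the parent execution that triggered it. `StageLike`, `ExecutionLike`, `upstreamStages()` and `parentExecution()` are stand-ins for orca's real stage and trigger model:

```java
import java.util.List;
import java.util.Optional;

// Hedged sketch of an artifact lookup that also considers the parent pipeline.
interface StageLike {
  List<StageLike> upstreamStages();            // ancestor stages in the same execution
  Optional<ExecutionLike> parentExecution();   // pipeline that triggered this one, if any
  Optional<String> artifactMatching(String packageName);
}

interface ExecutionLike {
  List<StageLike> stages();
}

class UpstreamArtifactLookup {
  Optional<String> find(StageLike bakeStage, String packageName) {
    // 1. Upstream stages in the current pipeline (e.g. a Jenkins stage producing foo.*.deb).
    for (StageLike stage : bakeStage.upstreamStages()) {
      Optional<String> artifact = stage.artifactMatching(packageName);
      if (artifact.isPresent()) return artifact;
    }
    // 2. Otherwise traverse into the parent pipeline's stages, similar to FindImage.
    return bakeStage.parentExecution()
        .flatMap(parent -> parent.stages().stream()
            .map(s -> s.artifactMatching(packageName))
            .flatMap(Optional::stream)
            .findFirst());
  }
}
```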

v7.14.0

fix(webhooks): addresses issue 3450 - introduce a delay before polling webhook (spinnaker#2984)

Add additional parameters to the monitored webhook to allow:
1. waiting some number of seconds before polling starts
2. retrying on specific HTTP status codes

Still needs a deck counterpart.

See [3450](spinnaker/spinnaker#3450).
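
For illustration only, a stage context carrying both knobs might look like the sketch below; the key names `waitBeforeMonitor` and `retryStatusCodes` are assumptions made here, not confirmed orca parameter names:

```java
import java.util.List;
import java.util.Map;

// Hedged sketch: a monitored-webhook stage context with a pre-poll delay and
// HTTP status codes that should be retried instead of failing the stage.
// All key names are illustrative.
class WebhookStageContextExample {
  static Map<String, Object> example() {
    return Map.of(
        "type", "webhook",
        "url", "https://example.com/deploy",
        "waitForCompletion", true,
        "waitBeforeMonitor", 30,              // seconds to wait before polling starts
        "retryStatusCodes", List.of(404, 503) // keep polling on these status codes
    );
  }
}
```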

v7.13.0

fix(front50): Handle failures in pipeline config history lookup (spinnaker#3004)

v7.12.1

fix(front50): Handle failures in pipeline config history lookup (spinnaker#3003)

v7.12.0

feat(perf): Favor a single pipeline config lookup (spinnaker#3001)

Fetching a single pipeline config _should_ be more efficient than
fetching all pipelines (for an application) and filtering in
`orca`.

This is somewhat nuanced, as the "all pipelines for an application" lookup is
likely being served out of the in-memory cache.

We have seen what looks like serialization-related load, and this PR
aims to offer some relief.
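
A hedged sketch contrasting the two access patterns; `getPipelines(application)` and `getPipeline(id)` stand in for the actual front50 client methods, which may be named differently:

```java
import java.util.List;
import java.util.Map;

// Hedged sketch: fetch-all-and-filter vs. fetch-by-id. Front50Like is a
// placeholder for the real front50 client interface.
interface Front50Like {
  List<Map<String, Object>> getPipelines(String application); // every config for the app
  Map<String, Object> getPipeline(String pipelineConfigId);   // a single config
}

class PipelineConfigLookup {
  // Before: deserialize every pipeline config for the application, then filter in orca.
  static Map<String, Object> fetchAllAndFilter(Front50Like front50, String app, String id) {
    return front50.getPipelines(app).stream()
        .filter(p -> id.equals(p.get("id")))
        .findFirst()
        .orElseThrow();
  }

  // After: ask front50 for just the one config, avoiding the large payload.
  static Map<String, Object> fetchSingle(Front50Like front50, String id) {
    return front50.getPipeline(id);
  }
}
```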

version-2.7.5

fix(jobs): fix race condition in override (spinnaker#2994)

Fixes a case where parameter overrides would override the base job
configuration as well as the job in the context. This is because the
previous implementation would assign a reference to `context[it]`
instead of a copy; when the parameter overriding was done, it would
modify not only the fields in the context but also the base configuration.
This presented itself when running more than one job in parallel. By forcing
a new copy (via `convertValue`, for simplicity) we get a fresh object
reference we can assign.

Note - manual patch from spinnaker#2988
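
The copy-vs-reference distinction can be illustrated with Jackson's `ObjectMapper.convertValue`, which the message refers to; this is a generic sketch, not the patched orca code:

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.HashMap;
import java.util.Map;

// Sketch of the bug and the fix: assigning a reference lets per-run parameter
// overrides leak back into the shared base configuration; convertValue yields
// a detached copy that can be mutated safely.
class JobOverrideCopyExample {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  static Map<String, Object> unsafeReference(Map<String, Object> baseConfig) {
    return baseConfig; // same object: overrides applied here also mutate the base config
  }

  static Map<String, Object> safeCopy(Map<String, Object> baseConfig) {
    // convertValue round-trips through Jackson's tree model, producing a fresh map
    return MAPPER.convertValue(baseConfig, new TypeReference<Map<String, Object>>() {});
  }

  public static void main(String[] args) {
    Map<String, Object> base = new HashMap<>(Map.of("cpu", "1"));
    Map<String, Object> perRun = safeCopy(base);
    perRun.put("cpu", "4");   // override for this run only
    System.out.println(base); // {cpu=1} -- base configuration untouched
  }
}
```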

version-2.6.4

perf(artifacts): use pageSize=1 when resolving prior artifacts (spinnaker#2955) (spinnaker#2990)

This is a smaller-scoped attempt at
spinnaker#2938, which relates to
spinnaker/spinnaker#4367.

`ArtifactResolver.resolveArtifacts(Map)` currently attempts to load
_all_ executions for a pipeline in order to find the most recent
execution in the course of resolving expected artifacts. This causes a
lot of unnecessary data to be loaded from Redis/SQL, only to be
discarded a few operations later.

This change makes use of the fact that the ExecutionRepository
implementations will respect the `executionCriteria.pageSize` argument
when retrieving Executions for a pipelineId.

In the Redis-based implementation, the executions are stored in a sorted
set scored on `buildTime` (or `currentTimeMillis()` if `buildTime` is
null), so retrieving all of the executions for the pipelineId with a
pageSize=1 should load the Execution with the most recent `buildTime`.

In the SQL-based implementation,
`retrievePipelinesForPipelineConfigId(String, ExecutionCriteria)` sorts
the query results based on the `id` field.

For both implementations, this is a small change from the existing
behavior of ArtifactResolver, which retrieves all executions and then
uses the one with the most recent `startTime` (or `id`). This change
should lead to the same result in most cases, though, since `buildTime` is
set when the Execution is stored and the `id` field ascends with the
timestamp at which it is generated.

The `retrievePipelinesForPipelineConfigId` method in both
implementations currently ignores the `executionCriteria.sortType`
field, but I've added it in the call from ArtifactResolver to at
least document ArtifactResolver's intent.
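
A hedged sketch of the narrowed query; the `ExecutionCriteria` type with chained `setPageSize`/`setSortType` setters mirrors the description above, but the exact orca method names and sort constants may differ:

```java
import java.util.List;

// Hedged sketch: request only the most recent execution for a pipeline config
// instead of loading the full history. Types and names are stand-ins for
// orca's ExecutionRepository API.
class PriorArtifactLookupSketch {
  static class ExecutionCriteria {
    int pageSize = 50;
    String sortType = "NATURAL_DESC";
    ExecutionCriteria setPageSize(int pageSize) { this.pageSize = pageSize; return this; }
    ExecutionCriteria setSortType(String sortType) { this.sortType = sortType; return this; }
  }

  interface ExecutionRepositoryLike {
    List<Object> retrievePipelinesForPipelineConfigId(String pipelineConfigId, ExecutionCriteria criteria);
  }

  static Object mostRecentExecution(ExecutionRepositoryLike repo, String pipelineConfigId) {
    // pageSize=1 asks the backing store (Redis sorted set / SQL query) to return
    // only the single most recent execution rather than all of them.
    List<Object> page = repo.retrievePipelinesForPipelineConfigId(
        pipelineConfigId, new ExecutionCriteria().setPageSize(1).setSortType("BUILD_TIME_DESC"));
    return page.isEmpty() ? null : page.get(0);
  }
}
```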

version-2.4.3

perf(artifacts): use pageSize=1 when resolving prior artifacts (spinnaker#2955) (spinnaker#2991)

This is a smaller-scoped attempt at
spinnaker#2938, which relates to
spinnaker/spinnaker#4367.

`ArtifactResolver.resolveArtifacts(Map)` currently attempts to load
_all_ executions for a pipeline in order to find the most recent
execution in the course of resolving expected artifacts. This causes a
lot of unnecessary data to be loaded from Redis/SQL, only to be
discarded a few operations later.

This change makes use of the fact that the ExecutionRepository
implementations will respect the `executionCriteria.pageSize` argument
when retrieving Executions for a pipelineId.

In the Redis-based implementation, the executions are stored in a sorted
set scored on `buildTime` (or `currentTimeMillis()` if `buildTime` is
null), so retrieving all of the executions for the pipelineId with a
pageSize=1 should load the Execution with the most recent `buildTime`.

In the SQL-based implementation,
`retrievePipelinesForPipelineConfigId(String, ExecutionCriteria)` sorts
the query results based on the `id` field.

For both implementations, this is a small change from the existing
behavior of ArtifactResolver, which retrieves all executions and then
uses the one with the most recent `startTime` (or `id`). This change
should lead to the same result in most cases, though, since `buildTime` is
set when the Execution is stored and the `id` field ascends with the
timestamp at which it is generated.

The `retrievePipelinesForPipelineConfigId` method in both
implementations currently ignores the `executionCriteria.sortType`
field, but I've added it in the call from ArtifactResolver to at
least document ArtifactResolver's intent.

v7.11.1

fix(expressions): Whitelist `DayOfWeek` enum for expressions
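
Pipeline expressions in Spinnaker are evaluated with SpEL, so whitelisting the enum means type references to it become usable. A minimal standalone SpEL example (plain Spring, not orca's sandboxed evaluator) showing how `DayOfWeek` is reached via a `T(...)` reference:

```java
import java.time.DayOfWeek;
import org.springframework.expression.spel.standard.SpelExpressionParser;

// Standalone SpEL example: a T(...) type reference to java.time.DayOfWeek,
// the kind of access a whitelist entry for the enum enables.
class DayOfWeekExpressionExample {
  public static void main(String[] args) {
    SpelExpressionParser parser = new SpelExpressionParser();
    DayOfWeek day = parser
        .parseExpression("T(java.time.DayOfWeek).FRIDAY")
        .getValue(DayOfWeek.class);
    System.out.println(day); // FRIDAY
  }
}
```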