cmd/k8s-operator/e2e: run self-contained e2e tests with devcontrol (#17415)

* cmd/k8s-operator/e2e: run self-contained e2e tests with devcontrol

Adds orchestration for more of the e2e test setup requirements, making
it easier to run the tests in CI and also to run them locally in a way
that's consistent with CI. Requires a running devcontrol instance, but
otherwise creates all the scaffolding required to exercise the operator
and proxies.
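As a hedged sketch, a local run of these self-contained tests might look like the following (the package path and build tag here are illustrative assumptions, not confirmed by this commit):

```shell
# Assumed local workflow for the operator e2e tests:
# 1. Start devcontrol, the local Tailscale control-plane server.
# 2. Run the e2e test package against it.
# The package path and build tag below are illustrative assumptions.
E2E_PKG="./cmd/k8s-operator/e2e/"
E2E_CMD="go test -tags e2e ${E2E_PKG}"
# Printed rather than executed here: a real run needs a live Kubernetes
# cluster and a running devcontrol instance.
echo "would run: ${E2E_CMD}"
```

The exact flags and entry points are defined by the test scaffolding this commit adds; the point is that devcontrol must already be running before the tests start.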

Updates tailscale/corp#32085

Change-Id: Ia7bff38af3801fd141ad17452aa5a68b7e724ca6
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>

* cmd/k8s-operator/e2e: be more specific about tmp dir cleanup

Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>

---------

Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Signed-off-by: chaosinthecrd <tom@tmlabs.co.uk>
Co-authored-by: chaosinthecrd <tom@tmlabs.co.uk>
This commit is contained in:
Tom Proctor, 2026-01-08 12:01:12 +00:00 (committed by GitHub)
parent 522a6e385e
commit 73cb3b491e
18 changed files with 1680 additions and 331 deletions
@@ -25,6 +25,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
github.com/evanphx/json-patch/v5 from sigs.k8s.io/controller-runtime/pkg/client
github.com/evanphx/json-patch/v5/internal/json from github.com/evanphx/json-patch/v5
💣 github.com/fsnotify/fsnotify from sigs.k8s.io/controller-runtime/pkg/certwatcher
github.com/fsnotify/fsnotify/internal from github.com/fsnotify/fsnotify
github.com/fxamacker/cbor/v2 from tailscale.com/tka+
github.com/gaissmai/bart from tailscale.com/net/ipset+
github.com/gaissmai/bart/internal/bitset from github.com/gaissmai/bart+
@@ -46,27 +47,18 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
💣 github.com/gogo/protobuf/proto from k8s.io/api/admission/v1+
github.com/gogo/protobuf/sortkeys from k8s.io/api/admission/v1+
github.com/golang/groupcache/lru from tailscale.com/net/dnscache
github.com/golang/protobuf/proto from k8s.io/client-go/discovery+
github.com/google/btree from gvisor.dev/gvisor/pkg/tcpip/header+
github.com/google/gnostic-models/compiler from github.com/google/gnostic-models/openapiv2+
github.com/google/gnostic-models/extensions from github.com/google/gnostic-models/compiler
github.com/google/gnostic-models/jsonschema from github.com/google/gnostic-models/compiler
github.com/google/gnostic-models/openapiv2 from k8s.io/client-go/discovery+
github.com/google/gnostic-models/openapiv3 from k8s.io/kube-openapi/pkg/handler3+
💣 github.com/google/go-cmp/cmp from k8s.io/apimachinery/pkg/util/diff+
github.com/google/go-cmp/cmp/internal/diff from github.com/google/go-cmp/cmp
github.com/google/go-cmp/cmp/internal/flags from github.com/google/go-cmp/cmp+
github.com/google/go-cmp/cmp/internal/function from github.com/google/go-cmp/cmp
💣 github.com/google/go-cmp/cmp/internal/value from github.com/google/go-cmp/cmp
github.com/google/gofuzz from k8s.io/apimachinery/pkg/apis/meta/v1+
github.com/google/gofuzz/bytesource from github.com/google/gofuzz
github.com/google/uuid from github.com/prometheus-community/pro-bing+
github.com/hdevalence/ed25519consensus from tailscale.com/tka
W 💣 github.com/inconshreveable/mousetrap from github.com/spf13/cobra
github.com/josharian/intern from github.com/mailru/easyjson/jlexer
L 💣 github.com/jsimonetti/rtnetlink from tailscale.com/net/netmon
L github.com/jsimonetti/rtnetlink/internal/unix from github.com/jsimonetti/rtnetlink
💣 github.com/json-iterator/go from sigs.k8s.io/structured-merge-diff/v4/fieldpath+
💣 github.com/json-iterator/go from sigs.k8s.io/structured-merge-diff/v6/fieldpath+
github.com/klauspost/compress from github.com/klauspost/compress/zstd
github.com/klauspost/compress/fse from github.com/klauspost/compress/huff0
github.com/klauspost/compress/huff0 from github.com/klauspost/compress/zstd
@@ -88,6 +80,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
github.com/opencontainers/go-digest from github.com/distribution/reference
github.com/pires/go-proxyproto from tailscale.com/ipn/ipnlocal
github.com/pkg/errors from github.com/evanphx/json-patch/v5+
github.com/pmezard/go-difflib/difflib from k8s.io/apimachinery/pkg/util/diff
D github.com/prometheus-community/pro-bing from tailscale.com/wgengine/netstack
github.com/prometheus/client_golang/internal/github.com/golang/gddo/httputil from github.com/prometheus/client_golang/prometheus/promhttp
github.com/prometheus/client_golang/internal/github.com/golang/gddo/httputil/header from github.com/prometheus/client_golang/internal/github.com/golang/gddo/httputil
@@ -103,7 +96,6 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
LD github.com/prometheus/procfs/internal/fs from github.com/prometheus/procfs
LD github.com/prometheus/procfs/internal/util from github.com/prometheus/procfs
L 💣 github.com/safchain/ethtool from tailscale.com/net/netkernelconf
github.com/spf13/cobra from k8s.io/component-base/cli/flag
github.com/spf13/pflag from k8s.io/client-go/tools/clientcmd+
W 💣 github.com/tailscale/certstore from tailscale.com/control/controlclient
W 💣 github.com/tailscale/go-winio from tailscale.com/safesocket
@@ -131,12 +123,14 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
github.com/tailscale/wireguard-go/tai64n from github.com/tailscale/wireguard-go/device
💣 github.com/tailscale/wireguard-go/tun from github.com/tailscale/wireguard-go/device+
github.com/x448/float16 from github.com/fxamacker/cbor/v2
go.opentelemetry.io/otel/attribute from go.opentelemetry.io/otel/trace
go.opentelemetry.io/otel/attribute from go.opentelemetry.io/otel/trace+
go.opentelemetry.io/otel/codes from go.opentelemetry.io/otel/trace
💣 go.opentelemetry.io/otel/internal from go.opentelemetry.io/otel/attribute
go.opentelemetry.io/otel/internal/attribute from go.opentelemetry.io/otel/attribute
go.opentelemetry.io/otel/semconv/v1.26.0 from go.opentelemetry.io/otel/trace
go.opentelemetry.io/otel/trace from k8s.io/component-base/metrics
go.opentelemetry.io/otel/trace/embedded from go.opentelemetry.io/otel/trace
💣 go.opentelemetry.io/otel/trace/internal/telemetry from go.opentelemetry.io/otel/trace
go.uber.org/multierr from go.uber.org/zap+
go.uber.org/zap from github.com/go-logr/zapr+
go.uber.org/zap/buffer from go.uber.org/zap/internal/bufferpool+
@@ -147,19 +141,20 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
go.uber.org/zap/internal/pool from go.uber.org/zap+
go.uber.org/zap/internal/stacktrace from go.uber.org/zap
go.uber.org/zap/zapcore from github.com/go-logr/zapr+
go.yaml.in/yaml/v2 from k8s.io/kube-openapi/pkg/util/proto+
go.yaml.in/yaml/v3 from github.com/google/gnostic-models/compiler+
💣 go4.org/mem from tailscale.com/client/local+
go4.org/netipx from tailscale.com/ipn/ipnlocal+
W 💣 golang.zx2c4.com/wintun from github.com/tailscale/wireguard-go/tun
W 💣 golang.zx2c4.com/wireguard/windows/tunnel/winipcfg from tailscale.com/net/dns+
gomodules.xyz/jsonpatch/v2 from sigs.k8s.io/controller-runtime/pkg/webhook+
google.golang.org/protobuf/encoding/protodelim from github.com/prometheus/common/expfmt
google.golang.org/protobuf/encoding/prototext from github.com/golang/protobuf/proto+
google.golang.org/protobuf/encoding/protowire from github.com/golang/protobuf/proto+
google.golang.org/protobuf/encoding/prototext from github.com/prometheus/common/expfmt+
google.golang.org/protobuf/encoding/protowire from google.golang.org/protobuf/encoding/protodelim+
google.golang.org/protobuf/internal/descfmt from google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/descopts from google.golang.org/protobuf/internal/filedesc+
google.golang.org/protobuf/internal/detrand from google.golang.org/protobuf/internal/descfmt+
google.golang.org/protobuf/internal/editiondefaults from google.golang.org/protobuf/internal/filedesc+
google.golang.org/protobuf/internal/editionssupport from google.golang.org/protobuf/reflect/protodesc
google.golang.org/protobuf/internal/editiondefaults from google.golang.org/protobuf/internal/filedesc
google.golang.org/protobuf/internal/encoding/defval from google.golang.org/protobuf/internal/encoding/tag+
google.golang.org/protobuf/internal/encoding/messageset from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/internal/encoding/tag from google.golang.org/protobuf/internal/impl
@@ -176,19 +171,17 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
google.golang.org/protobuf/internal/set from google.golang.org/protobuf/encoding/prototext
💣 google.golang.org/protobuf/internal/strs from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/internal/version from google.golang.org/protobuf/runtime/protoimpl
google.golang.org/protobuf/proto from github.com/golang/protobuf/proto+
google.golang.org/protobuf/reflect/protodesc from github.com/golang/protobuf/proto
💣 google.golang.org/protobuf/reflect/protoreflect from github.com/golang/protobuf/proto+
google.golang.org/protobuf/reflect/protoregistry from github.com/golang/protobuf/proto+
google.golang.org/protobuf/runtime/protoiface from github.com/golang/protobuf/proto+
google.golang.org/protobuf/runtime/protoimpl from github.com/golang/protobuf/proto+
💣 google.golang.org/protobuf/types/descriptorpb from github.com/google/gnostic-models/openapiv3+
💣 google.golang.org/protobuf/types/gofeaturespb from google.golang.org/protobuf/reflect/protodesc
google.golang.org/protobuf/proto from github.com/google/gnostic-models/compiler+
💣 google.golang.org/protobuf/reflect/protoreflect from github.com/google/gnostic-models/extensions+
google.golang.org/protobuf/reflect/protoregistry from google.golang.org/protobuf/encoding/prototext+
google.golang.org/protobuf/runtime/protoiface from google.golang.org/protobuf/internal/impl+
google.golang.org/protobuf/runtime/protoimpl from github.com/google/gnostic-models/extensions+
💣 google.golang.org/protobuf/types/descriptorpb from github.com/google/gnostic-models/openapiv3
💣 google.golang.org/protobuf/types/known/anypb from github.com/google/gnostic-models/compiler+
💣 google.golang.org/protobuf/types/known/timestamppb from github.com/prometheus/client_golang/prometheus+
gopkg.in/evanphx/json-patch.v4 from k8s.io/client-go/testing
gopkg.in/inf.v0 from k8s.io/apimachinery/pkg/api/resource
gopkg.in/yaml.v3 from github.com/go-openapi/swag+
gopkg.in/yaml.v3 from github.com/go-openapi/swag
gvisor.dev/gvisor/pkg/atomicbitops from gvisor.dev/gvisor/pkg/buffer+
gvisor.dev/gvisor/pkg/bits from gvisor.dev/gvisor/pkg/buffer
💣 gvisor.dev/gvisor/pkg/buffer from gvisor.dev/gvisor/pkg/tcpip+
@@ -269,7 +262,6 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/api/flowcontrol/v1beta2 from k8s.io/client-go/applyconfigurations/flowcontrol/v1beta2+
k8s.io/api/flowcontrol/v1beta3 from k8s.io/client-go/applyconfigurations/flowcontrol/v1beta3+
k8s.io/api/networking/v1 from k8s.io/client-go/applyconfigurations/networking/v1+
k8s.io/api/networking/v1alpha1 from k8s.io/client-go/applyconfigurations/networking/v1alpha1+
k8s.io/api/networking/v1beta1 from k8s.io/client-go/applyconfigurations/networking/v1beta1+
k8s.io/api/node/v1 from k8s.io/client-go/applyconfigurations/node/v1+
k8s.io/api/node/v1alpha1 from k8s.io/client-go/applyconfigurations/node/v1alpha1+
@@ -279,8 +271,10 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/api/rbac/v1 from k8s.io/client-go/applyconfigurations/rbac/v1+
k8s.io/api/rbac/v1alpha1 from k8s.io/client-go/applyconfigurations/rbac/v1alpha1+
k8s.io/api/rbac/v1beta1 from k8s.io/client-go/applyconfigurations/rbac/v1beta1+
k8s.io/api/resource/v1 from k8s.io/client-go/applyconfigurations/resource/v1+
k8s.io/api/resource/v1alpha3 from k8s.io/client-go/applyconfigurations/resource/v1alpha3+
k8s.io/api/resource/v1beta1 from k8s.io/client-go/applyconfigurations/resource/v1beta1+
k8s.io/api/resource/v1beta2 from k8s.io/client-go/applyconfigurations/resource/v1beta2+
k8s.io/api/scheduling/v1 from k8s.io/client-go/applyconfigurations/scheduling/v1+
k8s.io/api/scheduling/v1alpha1 from k8s.io/client-go/applyconfigurations/scheduling/v1alpha1+
k8s.io/api/scheduling/v1beta1 from k8s.io/client-go/applyconfigurations/scheduling/v1beta1+
@@ -294,16 +288,20 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/apimachinery/pkg/api/errors from k8s.io/apimachinery/pkg/util/managedfields/internal+
k8s.io/apimachinery/pkg/api/meta from k8s.io/apimachinery/pkg/api/validation+
k8s.io/apimachinery/pkg/api/meta/testrestmapper from k8s.io/client-go/testing
k8s.io/apimachinery/pkg/api/operation from k8s.io/api/extensions/v1beta1+
k8s.io/apimachinery/pkg/api/resource from k8s.io/api/autoscaling/v1+
k8s.io/apimachinery/pkg/api/safe from k8s.io/api/extensions/v1beta1
k8s.io/apimachinery/pkg/api/validate from k8s.io/api/extensions/v1beta1
k8s.io/apimachinery/pkg/api/validate/constraints from k8s.io/apimachinery/pkg/api/validate+
k8s.io/apimachinery/pkg/api/validate/content from k8s.io/apimachinery/pkg/api/validate
k8s.io/apimachinery/pkg/api/validation from k8s.io/apimachinery/pkg/util/managedfields/internal+
k8s.io/apimachinery/pkg/api/validation/path from k8s.io/apiserver/pkg/endpoints/request
💣 k8s.io/apimachinery/pkg/apis/meta/internalversion from k8s.io/apimachinery/pkg/apis/meta/internalversion/scheme+
k8s.io/apimachinery/pkg/apis/meta/internalversion/scheme from k8s.io/client-go/metadata+
k8s.io/apimachinery/pkg/apis/meta/internalversion/validation from k8s.io/client-go/util/watchlist
💣 k8s.io/apimachinery/pkg/apis/meta/v1 from k8s.io/api/admission/v1+
k8s.io/apimachinery/pkg/apis/meta/v1/unstructured from k8s.io/apimachinery/pkg/runtime/serializer/versioning+
k8s.io/apimachinery/pkg/apis/meta/v1/validation from k8s.io/apimachinery/pkg/api/validation+
💣 k8s.io/apimachinery/pkg/apis/meta/v1beta1 from k8s.io/apimachinery/pkg/apis/meta/internalversion
💣 k8s.io/apimachinery/pkg/apis/meta/v1beta1 from k8s.io/apimachinery/pkg/apis/meta/internalversion+
k8s.io/apimachinery/pkg/conversion from k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1+
k8s.io/apimachinery/pkg/conversion/queryparams from k8s.io/apimachinery/pkg/runtime+
k8s.io/apimachinery/pkg/fields from k8s.io/apimachinery/pkg/api/equality+
@@ -322,7 +320,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/apimachinery/pkg/selection from k8s.io/apimachinery/pkg/apis/meta/v1+
k8s.io/apimachinery/pkg/types from k8s.io/api/admission/v1+
k8s.io/apimachinery/pkg/util/cache from k8s.io/client-go/tools/cache
k8s.io/apimachinery/pkg/util/diff from k8s.io/client-go/tools/cache
k8s.io/apimachinery/pkg/util/diff from k8s.io/client-go/tools/cache+
k8s.io/apimachinery/pkg/util/dump from k8s.io/apimachinery/pkg/util/diff+
k8s.io/apimachinery/pkg/util/errors from k8s.io/apimachinery/pkg/api/meta+
k8s.io/apimachinery/pkg/util/framer from k8s.io/apimachinery/pkg/runtime/serializer/json+
@@ -385,7 +383,6 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/applyconfigurations/internal from k8s.io/client-go/applyconfigurations/admissionregistration/v1+
k8s.io/client-go/applyconfigurations/meta/v1 from k8s.io/client-go/applyconfigurations/admissionregistration/v1+
k8s.io/client-go/applyconfigurations/networking/v1 from k8s.io/client-go/kubernetes/typed/networking/v1
k8s.io/client-go/applyconfigurations/networking/v1alpha1 from k8s.io/client-go/kubernetes/typed/networking/v1alpha1
k8s.io/client-go/applyconfigurations/networking/v1beta1 from k8s.io/client-go/kubernetes/typed/networking/v1beta1
k8s.io/client-go/applyconfigurations/node/v1 from k8s.io/client-go/kubernetes/typed/node/v1
k8s.io/client-go/applyconfigurations/node/v1alpha1 from k8s.io/client-go/kubernetes/typed/node/v1alpha1
@@ -395,8 +392,10 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/applyconfigurations/rbac/v1 from k8s.io/client-go/kubernetes/typed/rbac/v1
k8s.io/client-go/applyconfigurations/rbac/v1alpha1 from k8s.io/client-go/kubernetes/typed/rbac/v1alpha1
k8s.io/client-go/applyconfigurations/rbac/v1beta1 from k8s.io/client-go/kubernetes/typed/rbac/v1beta1
k8s.io/client-go/applyconfigurations/resource/v1 from k8s.io/client-go/kubernetes/typed/resource/v1
k8s.io/client-go/applyconfigurations/resource/v1alpha3 from k8s.io/client-go/kubernetes/typed/resource/v1alpha3
k8s.io/client-go/applyconfigurations/resource/v1beta1 from k8s.io/client-go/kubernetes/typed/resource/v1beta1
k8s.io/client-go/applyconfigurations/resource/v1beta2 from k8s.io/client-go/kubernetes/typed/resource/v1beta2
k8s.io/client-go/applyconfigurations/scheduling/v1 from k8s.io/client-go/kubernetes/typed/scheduling/v1
k8s.io/client-go/applyconfigurations/scheduling/v1alpha1 from k8s.io/client-go/kubernetes/typed/scheduling/v1alpha1
k8s.io/client-go/applyconfigurations/scheduling/v1beta1 from k8s.io/client-go/kubernetes/typed/scheduling/v1beta1
@@ -453,7 +452,6 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/informers/internalinterfaces from k8s.io/client-go/informers+
k8s.io/client-go/informers/networking from k8s.io/client-go/informers
k8s.io/client-go/informers/networking/v1 from k8s.io/client-go/informers/networking
k8s.io/client-go/informers/networking/v1alpha1 from k8s.io/client-go/informers/networking
k8s.io/client-go/informers/networking/v1beta1 from k8s.io/client-go/informers/networking
k8s.io/client-go/informers/node from k8s.io/client-go/informers
k8s.io/client-go/informers/node/v1 from k8s.io/client-go/informers/node
@@ -467,8 +465,10 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/informers/rbac/v1alpha1 from k8s.io/client-go/informers/rbac
k8s.io/client-go/informers/rbac/v1beta1 from k8s.io/client-go/informers/rbac
k8s.io/client-go/informers/resource from k8s.io/client-go/informers
k8s.io/client-go/informers/resource/v1 from k8s.io/client-go/informers/resource
k8s.io/client-go/informers/resource/v1alpha3 from k8s.io/client-go/informers/resource
k8s.io/client-go/informers/resource/v1beta1 from k8s.io/client-go/informers/resource
k8s.io/client-go/informers/resource/v1beta2 from k8s.io/client-go/informers/resource
k8s.io/client-go/informers/scheduling from k8s.io/client-go/informers
k8s.io/client-go/informers/scheduling/v1 from k8s.io/client-go/informers/scheduling
k8s.io/client-go/informers/scheduling/v1alpha1 from k8s.io/client-go/informers/scheduling
@@ -503,8 +503,8 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/kubernetes/typed/certificates/v1alpha1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/certificates/v1beta1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/coordination/v1 from k8s.io/client-go/kubernetes+
k8s.io/client-go/kubernetes/typed/coordination/v1alpha2 from k8s.io/client-go/kubernetes+
k8s.io/client-go/kubernetes/typed/coordination/v1beta1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/coordination/v1alpha2 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/coordination/v1beta1 from k8s.io/client-go/kubernetes+
k8s.io/client-go/kubernetes/typed/core/v1 from k8s.io/client-go/kubernetes+
k8s.io/client-go/kubernetes/typed/discovery/v1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/discovery/v1beta1 from k8s.io/client-go/kubernetes
@@ -516,7 +516,6 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/kubernetes/typed/flowcontrol/v1beta2 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/flowcontrol/v1beta3 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/networking/v1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/networking/v1alpha1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/networking/v1beta1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/node/v1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/node/v1alpha1 from k8s.io/client-go/kubernetes
@@ -526,8 +525,10 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/kubernetes/typed/rbac/v1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/rbac/v1alpha1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/rbac/v1beta1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/resource/v1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/resource/v1alpha3 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/resource/v1beta1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/resource/v1beta2 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/scheduling/v1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/scheduling/v1alpha1 from k8s.io/client-go/kubernetes
k8s.io/client-go/kubernetes/typed/scheduling/v1beta1 from k8s.io/client-go/kubernetes
@@ -566,7 +567,6 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/listers/flowcontrol/v1beta2 from k8s.io/client-go/informers/flowcontrol/v1beta2
k8s.io/client-go/listers/flowcontrol/v1beta3 from k8s.io/client-go/informers/flowcontrol/v1beta3
k8s.io/client-go/listers/networking/v1 from k8s.io/client-go/informers/networking/v1
k8s.io/client-go/listers/networking/v1alpha1 from k8s.io/client-go/informers/networking/v1alpha1
k8s.io/client-go/listers/networking/v1beta1 from k8s.io/client-go/informers/networking/v1beta1
k8s.io/client-go/listers/node/v1 from k8s.io/client-go/informers/node/v1
k8s.io/client-go/listers/node/v1alpha1 from k8s.io/client-go/informers/node/v1alpha1
@@ -576,8 +576,10 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/listers/rbac/v1 from k8s.io/client-go/informers/rbac/v1
k8s.io/client-go/listers/rbac/v1alpha1 from k8s.io/client-go/informers/rbac/v1alpha1
k8s.io/client-go/listers/rbac/v1beta1 from k8s.io/client-go/informers/rbac/v1beta1
k8s.io/client-go/listers/resource/v1 from k8s.io/client-go/informers/resource/v1
k8s.io/client-go/listers/resource/v1alpha3 from k8s.io/client-go/informers/resource/v1alpha3
k8s.io/client-go/listers/resource/v1beta1 from k8s.io/client-go/informers/resource/v1beta1
k8s.io/client-go/listers/resource/v1beta2 from k8s.io/client-go/informers/resource/v1beta2
k8s.io/client-go/listers/scheduling/v1 from k8s.io/client-go/informers/scheduling/v1
k8s.io/client-go/listers/scheduling/v1alpha1 from k8s.io/client-go/informers/scheduling/v1alpha1
k8s.io/client-go/listers/scheduling/v1beta1 from k8s.io/client-go/informers/scheduling/v1beta1
@@ -616,19 +618,18 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/client-go/util/apply from k8s.io/client-go/dynamic+
k8s.io/client-go/util/cert from k8s.io/client-go/rest+
k8s.io/client-go/util/connrotation from k8s.io/client-go/plugin/pkg/client/auth/exec+
k8s.io/client-go/util/consistencydetector from k8s.io/client-go/dynamic+
k8s.io/client-go/util/consistencydetector from k8s.io/client-go/tools/cache
k8s.io/client-go/util/flowcontrol from k8s.io/client-go/kubernetes+
k8s.io/client-go/util/homedir from k8s.io/client-go/tools/clientcmd
k8s.io/client-go/util/keyutil from k8s.io/client-go/util/cert
k8s.io/client-go/util/watchlist from k8s.io/client-go/dynamic+
k8s.io/client-go/util/workqueue from k8s.io/client-go/transport+
k8s.io/component-base/cli/flag from k8s.io/component-base/featuregate
k8s.io/component-base/featuregate from k8s.io/apiserver/pkg/features+
k8s.io/component-base/metrics from k8s.io/component-base/metrics/legacyregistry+
k8s.io/component-base/metrics/legacyregistry from k8s.io/component-base/metrics/prometheus/feature
k8s.io/component-base/metrics/prometheus/feature from k8s.io/component-base/featuregate
k8s.io/component-base/metrics/prometheusextension from k8s.io/component-base/metrics
k8s.io/component-base/version from k8s.io/component-base/featuregate+
k8s.io/component-base/zpages/features from k8s.io/apiserver/pkg/features
k8s.io/klog/v2 from k8s.io/apimachinery/pkg/api/meta+
k8s.io/klog/v2/internal/buffer from k8s.io/klog/v2
k8s.io/klog/v2/internal/clock from k8s.io/klog/v2
@@ -647,12 +648,10 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
k8s.io/kube-openapi/pkg/validation/spec from k8s.io/apimachinery/pkg/util/managedfields+
k8s.io/utils/buffer from k8s.io/client-go/tools/cache
k8s.io/utils/clock from k8s.io/apimachinery/pkg/util/cache+
k8s.io/utils/clock/testing from k8s.io/client-go/util/flowcontrol
k8s.io/utils/internal/third_party/forked/golang/golang-lru from k8s.io/utils/lru
k8s.io/utils/internal/third_party/forked/golang/net from k8s.io/utils/net
k8s.io/utils/lru from k8s.io/client-go/tools/record
k8s.io/utils/net from k8s.io/apimachinery/pkg/util/net+
k8s.io/utils/pointer from k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1+
k8s.io/utils/ptr from k8s.io/client-go/tools/cache+
k8s.io/utils/trace from k8s.io/client-go/tools/cache
sigs.k8s.io/controller-runtime/pkg/builder from tailscale.com/cmd/k8s-operator
@@ -696,13 +695,14 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
sigs.k8s.io/controller-runtime/pkg/webhook/internal/metrics from sigs.k8s.io/controller-runtime/pkg/webhook+
sigs.k8s.io/json from k8s.io/apimachinery/pkg/runtime/serializer/json+
sigs.k8s.io/json/internal/golang/encoding/json from sigs.k8s.io/json
💣 sigs.k8s.io/structured-merge-diff/v4/fieldpath from k8s.io/apimachinery/pkg/util/managedfields+
sigs.k8s.io/structured-merge-diff/v4/merge from k8s.io/apimachinery/pkg/util/managedfields/internal
sigs.k8s.io/structured-merge-diff/v4/schema from k8s.io/apimachinery/pkg/util/managedfields+
sigs.k8s.io/structured-merge-diff/v4/typed from k8s.io/apimachinery/pkg/util/managedfields+
sigs.k8s.io/structured-merge-diff/v4/value from k8s.io/apimachinery/pkg/runtime+
💣 sigs.k8s.io/randfill from k8s.io/apimachinery/pkg/apis/meta/v1+
sigs.k8s.io/randfill/bytesource from sigs.k8s.io/randfill
💣 sigs.k8s.io/structured-merge-diff/v6/fieldpath from k8s.io/apimachinery/pkg/util/managedfields+
sigs.k8s.io/structured-merge-diff/v6/merge from k8s.io/apimachinery/pkg/util/managedfields/internal
sigs.k8s.io/structured-merge-diff/v6/schema from k8s.io/apimachinery/pkg/util/managedfields+
sigs.k8s.io/structured-merge-diff/v6/typed from k8s.io/apimachinery/pkg/util/managedfields+
sigs.k8s.io/structured-merge-diff/v6/value from k8s.io/apimachinery/pkg/runtime+
sigs.k8s.io/yaml from k8s.io/apimachinery/pkg/runtime/serializer/json+
sigs.k8s.io/yaml/goyaml.v2 from sigs.k8s.io/yaml+
tailscale.com from tailscale.com/version
tailscale.com/appc from tailscale.com/ipn/ipnlocal
💣 tailscale.com/atomicfile from tailscale.com/ipn+
@@ -1152,7 +1152,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
math from compress/flate+
math/big from crypto/dsa+
math/bits from compress/flate+
math/rand from github.com/google/go-cmp/cmp+
math/rand from github.com/fxamacker/cbor/v2+
math/rand/v2 from crypto/ecdsa+
mime from github.com/prometheus/common/expfmt+
mime/multipart from github.com/go-openapi/swag+
@@ -1191,7 +1191,7 @@ tailscale.com/cmd/k8s-operator dependencies: (generated by github.com/tailscale/
sync/atomic from context+
syscall from crypto/internal/sysrand+
text/tabwriter from k8s.io/apimachinery/pkg/util/diff+
text/template from html/template+
text/template from html/template
text/template/parse from html/template+
time from compress/gzip+
unicode from bytes+
@@ -431,7 +431,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -446,7 +445,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -603,7 +601,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -618,7 +615,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -703,8 +699,8 @@ spec:
most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource
request, requiredDuringScheduling anti-affinity expressions, etc.),
compute a sum by iterating through the elements of this field and adding
"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the
compute a sum by iterating through the elements of this field and subtracting
"weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the
node(s) with the highest sum are the most preferred.
type: array
items:
@@ -776,7 +772,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -791,7 +786,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -948,7 +942,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -963,7 +956,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -1482,7 +1474,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -1823,7 +1815,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -2238,7 +2230,6 @@ spec:
- Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
If this value is nil, the behavior is equivalent to the Honor policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
nodeTaintsPolicy:
description: |-
@@ -2249,7 +2240,6 @@ spec:
- Ignore: node taints are ignored. All nodes are included.
If this value is nil, the behavior is equivalent to the Ignore policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
topologyKey:
description: |-
@@ -380,7 +380,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -395,7 +394,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -552,7 +550,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -567,7 +564,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -652,8 +648,8 @@ spec:
most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource
request, requiredDuringScheduling anti-affinity expressions, etc.),
compute a sum by iterating through the elements of this field and adding
"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the
compute a sum by iterating through the elements of this field and subtracting
"weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the
node(s) with the highest sum are the most preferred.
type: array
items:
@@ -725,7 +721,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -740,7 +735,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -897,7 +891,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -912,7 +905,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
type: array
items:
type: string
@@ -1057,7 +1049,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -992,7 +992,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -1007,7 +1006,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -1168,7 +1166,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -1183,7 +1180,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -1272,8 +1268,8 @@ spec:
most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource
request, requiredDuringScheduling anti-affinity expressions, etc.),
compute a sum by iterating through the elements of this field and adding
"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the
compute a sum by iterating through the elements of this field and subtracting
"weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the
node(s) with the highest sum are the most preferred.
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
@@ -1337,7 +1333,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -1352,7 +1347,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -1513,7 +1507,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -1528,7 +1521,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -2051,7 +2043,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -2392,7 +2384,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -2803,7 +2795,6 @@ spec:
- Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations.
If this value is nil, the behavior is equivalent to the Honor policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
nodeTaintsPolicy:
description: |-
@@ -2814,7 +2805,6 @@ spec:
- Ignore: node taints are ignored. All nodes are included.
If this value is nil, the behavior is equivalent to the Ignore policy.
This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag.
type: string
topologyKey:
description: |-
@@ -3648,7 +3638,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -3663,7 +3652,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -3824,7 +3812,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -3839,7 +3826,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -3928,8 +3914,8 @@ spec:
most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource
request, requiredDuringScheduling anti-affinity expressions, etc.),
compute a sum by iterating through the elements of this field and adding
"weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the
compute a sum by iterating through the elements of this field and subtracting
"weight" from the sum if the node has pods which matches the corresponding podAffinityTerm; the
node(s) with the highest sum are the most preferred.
items:
description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)
@@ -3993,7 +3979,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -4008,7 +3993,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -4169,7 +4153,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both matchLabelKeys and labelSelector.
Also, matchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -4184,7 +4167,6 @@ spec:
pod labels will be ignored. The default value is empty.
The same key is forbidden to exist in both mismatchLabelKeys and labelSelector.
Also, mismatchLabelKeys cannot be set when labelSelector isn't set.
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default).
items:
type: string
type: array
@@ -4333,7 +4315,7 @@ spec:
Claims lists the names of resources, defined in spec.resourceClaims,
that are used by this container.
This is an alpha field and requires enabling the
This field depends on the
DynamicResourceAllocation feature gate.
This field is immutable. It can only be set for containers.
@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDCTCCAfGgAwIBAgIIJOLbes8sTr4wDQYJKoZIhvcNAQELBQAwIDEeMBwGA1UE
AxMVbWluaWNhIHJvb3QgY2EgMjRlMmRiMCAXDTE3MTIwNjE5NDIxMFoYDzIxMTcx
MjA2MTk0MjEwWjAgMR4wHAYDVQQDExVtaW5pY2Egcm9vdCBjYSAyNGUyZGIwggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC5WgZNoVJandj43kkLyU50vzCZ
alozvdRo3OFiKoDtmqKPNWRNO2hC9AUNxTDJco51Yc42u/WV3fPbbhSznTiOOVtn
Ajm6iq4I5nZYltGGZetGDOQWr78y2gWY+SG078MuOO2hyDIiKtVc3xiXYA+8Hluu
9F8KbqSS1h55yxZ9b87eKR+B0zu2ahzBCIHKmKWgc6N13l7aDxxY3D6uq8gtJRU0
toumyLbdzGcupVvjbjDP11nl07RESDWBLG1/g3ktJvqIa4BWgU2HMh4rND6y8OD3
Hy3H8MY6CElL+MOCbFJjWqhtOxeFyZZV9q3kYnk9CAuQJKMEGuN4GU6tzhW1AgMB
AAGjRTBDMA4GA1UdDwEB/wQEAwIChDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYB
BQUHAwIwEgYDVR0TAQH/BAgwBgEB/wIBADANBgkqhkiG9w0BAQsFAAOCAQEAF85v
d40HK1ouDAtWeO1PbnWfGEmC5Xa478s9ddOd9Clvp2McYzNlAFfM7kdcj6xeiNhF
WPIfaGAi/QdURSL/6C1KsVDqlFBlTs9zYfh2g0UXGvJtj1maeih7zxFLvet+fqll
xseM4P9EVJaQxwuK/F78YBt0tCNfivC6JNZMgxKF59h0FBpH70ytUSHXdz7FKwix
Mfn3qEb9BXSk0Q3prNV5sOV3vgjEtB4THfDxSz9z3+DepVnW3vbbqwEbkXdk3j82
2muVldgOUgTwK8eT+XdofVdntzU/kzygSAtAQwLJfn51fS1GvEcYGBc1bDryIqmF
p9BI7gVKtWSZYegicA==
-----END CERTIFICATE-----
+28
@@ -0,0 +1,28 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
// Package e2e runs end-to-end tests for the Tailscale Kubernetes operator.
//
// To run without arguments, it requires:
//
// * Kubernetes cluster with local kubeconfig for it (direct connection, no API server proxy)
// * Tailscale operator installed with --set apiServerProxyConfig.mode="true"
// * ACLs from acl.hujson
// * OAuth client secret in TS_API_CLIENT_SECRET env, with at least auth_keys write scope and tag:k8s tag
// * Default ProxyClass and operator env vars as appropriate to set the desired default proxy images.
//
// It also supports running against devcontrol, using the --devcontrol flag,
// which it expects to reach at http://localhost:31544. Use --cluster to create
// a dedicated kind cluster for the tests, and --build to build and test the
// operator and proxy images for the current checkout.
//
// To run with minimal dependencies, use:
//
// go test -count=1 -v ./cmd/k8s-operator/e2e/ --build --cluster --devcontrol --skip-cleanup
//
// Running like this, it requires:
//
// * go
// * container runtime with the docker daemon API available
// * devcontrol: ./tool/go run ./cmd/devcontrol --generate-test-devices=k8s-operator-e2e --scenario-output-dir=/tmp/k8s-operator-e2e --test-dns=http://localhost:8055
package e2e
+7 -15
@@ -14,8 +14,6 @@ import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/config"
kube "tailscale.com/k8s-operator"
"tailscale.com/tstest"
"tailscale.com/types/ptr"
@@ -24,17 +22,12 @@ import (
// See [TestMain] for test requirements.
func TestIngress(t *testing.T) {
if apiClient == nil {
t.Skip("TestIngress requires TS_API_CLIENT_SECRET set")
if tnClient == nil {
t.Skip("TestIngress requires a working tailnet client")
}
cfg := config.GetConfigOrDie()
cl, err := client.New(cfg, client.Options{})
if err != nil {
t.Fatal(err)
}
// Apply nginx
createAndCleanup(t, cl,
createAndCleanup(t, kubeClient,
&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "nginx",
@@ -73,8 +66,7 @@ func TestIngress(t *testing.T) {
Name: "test-ingress",
Namespace: "default",
Annotations: map[string]string{
"tailscale.com/expose": "true",
"tailscale.com/proxy-class": "prod",
"tailscale.com/expose": "true",
},
},
Spec: corev1.ServiceSpec{
@@ -90,12 +82,12 @@ func TestIngress(t *testing.T) {
},
},
}
createAndCleanup(t, cl, svc)
createAndCleanup(t, kubeClient, svc)
// TODO: instead of timing out only when test times out, cancel context after 60s or so.
if err := wait.PollUntilContextCancel(t.Context(), time.Millisecond*100, true, func(ctx context.Context) (done bool, err error) {
maybeReadySvc := &corev1.Service{ObjectMeta: objectMeta("default", "test-ingress")}
if err := get(ctx, cl, maybeReadySvc); err != nil {
if err := get(ctx, kubeClient, maybeReadySvc); err != nil {
return false, err
}
isReady := kube.SvcIsReady(maybeReadySvc)
@@ -118,7 +110,7 @@ func TestIngress(t *testing.T) {
}
ctx, cancel := context.WithTimeout(t.Context(), time.Second)
defer cancel()
resp, err = tailnetClient.HTTPClient().Do(req.WithContext(ctx))
resp, err = tnClient.HTTPClient().Do(req.WithContext(ctx))
return err
}); err != nil {
t.Fatalf("error trying to reach Service: %v", err)
+6 -68
@@ -5,34 +5,22 @@ package e2e
import (
"context"
"errors"
"flag"
"log"
"os"
"strings"
"testing"
"time"
"golang.org/x/oauth2/clientcredentials"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
"tailscale.com/internal/client/tailscale"
"tailscale.com/ipn/store/mem"
"tailscale.com/tsnet"
)
// This test suite is currently not run in CI.
// It requires some setup not handled by this code:
// - Kubernetes cluster with local kubeconfig for it (direct connection, no API server proxy)
// - Tailscale operator installed with --set apiServerProxyConfig.mode="true"
// - ACLs from acl.hujson
// - OAuth client secret in TS_API_CLIENT_SECRET env, with at least auth_keys write scope and tag:k8s tag
var (
apiClient *tailscale.Client // For API calls to control.
tailnetClient *tsnet.Server // For testing real tailnet traffic.
)
func TestMain(m *testing.M) {
flag.Parse()
if !*fDevcontrol && os.Getenv("TS_API_CLIENT_SECRET") == "" {
log.Printf("Skipping setup: devcontrol is false and TS_API_CLIENT_SECRET is not set")
os.Exit(m.Run())
}
code, err := runTests(m)
if err != nil {
log.Printf("Error: %v", err)
@@ -41,56 +29,6 @@ func TestMain(m *testing.M) {
os.Exit(code)
}
func runTests(m *testing.M) (int, error) {
secret := os.Getenv("TS_API_CLIENT_SECRET")
if secret != "" {
secretParts := strings.Split(secret, "-")
if len(secretParts) != 4 {
return 0, errors.New("TS_API_CLIENT_SECRET is not valid")
}
ctx := context.Background()
credentials := clientcredentials.Config{
ClientID: secretParts[2],
ClientSecret: secret,
TokenURL: "https://login.tailscale.com/api/v2/oauth/token",
Scopes: []string{"auth_keys"},
}
apiClient = tailscale.NewClient("-", nil)
apiClient.HTTPClient = credentials.Client(ctx)
caps := tailscale.KeyCapabilities{
Devices: tailscale.KeyDeviceCapabilities{
Create: tailscale.KeyDeviceCreateCapabilities{
Reusable: false,
Preauthorized: true,
Ephemeral: true,
Tags: []string{"tag:k8s"},
},
},
}
authKey, authKeyMeta, err := apiClient.CreateKeyWithExpiry(ctx, caps, 10*time.Minute)
if err != nil {
return 0, err
}
defer apiClient.DeleteKey(context.Background(), authKeyMeta.ID)
tailnetClient = &tsnet.Server{
Hostname: "test-proxy",
Ephemeral: true,
Store: &mem.Store{},
AuthKey: authKey,
}
_, err = tailnetClient.Up(ctx)
if err != nil {
return 0, err
}
defer tailnetClient.Close()
}
return m.Run(), nil
}
func objectMeta(namespace, name string) metav1.ObjectMeta {
return metav1.ObjectMeta{
Namespace: namespace,
+174
@@ -0,0 +1,174 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package e2e
import (
"context"
"fmt"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"sigs.k8s.io/controller-runtime/pkg/client"
"tailscale.com/types/ptr"
)
func applyPebbleResources(ctx context.Context, cl client.Client) error {
owner := client.FieldOwner("k8s-test")
if err := cl.Patch(ctx, pebbleDeployment(pebbleTag), client.Apply, owner); err != nil {
return fmt.Errorf("failed to apply pebble Deployment: %w", err)
}
if err := cl.Patch(ctx, pebbleService(), client.Apply, owner); err != nil {
return fmt.Errorf("failed to apply pebble Service: %w", err)
}
if err := cl.Patch(ctx, tailscaleNamespace(), client.Apply, owner); err != nil {
return fmt.Errorf("failed to apply tailscale Namespace: %w", err)
}
if err := cl.Patch(ctx, pebbleExternalNameService(), client.Apply, owner); err != nil {
return fmt.Errorf("failed to apply pebble ExternalName Service: %w", err)
}
return nil
}
func pebbleDeployment(tag string) *appsv1.Deployment {
return &appsv1.Deployment{
TypeMeta: metav1.TypeMeta{
Kind: "Deployment",
APIVersion: "apps/v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "pebble",
Namespace: ns,
},
Spec: appsv1.DeploymentSpec{
Replicas: ptr.To[int32](1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app": "pebble",
},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app": "pebble",
},
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "pebble",
Image: fmt.Sprintf("ghcr.io/letsencrypt/pebble:%s", tag),
ImagePullPolicy: corev1.PullIfNotPresent,
Args: []string{
"-dnsserver=localhost:8053",
"-strict",
},
Ports: []corev1.ContainerPort{
{
Name: "acme",
ContainerPort: 14000,
},
{
Name: "pebble-api",
ContainerPort: 15000,
},
},
Env: []corev1.EnvVar{
{
Name: "PEBBLE_VA_NOSLEEP",
Value: "1",
},
},
},
{
Name: "challtestsrv",
Image: fmt.Sprintf("ghcr.io/letsencrypt/pebble-challtestsrv:%s", tag),
ImagePullPolicy: corev1.PullIfNotPresent,
Args: []string{"-defaultIPv6="},
Ports: []corev1.ContainerPort{
{
Name: "mgmt-api",
ContainerPort: 8055,
},
},
},
},
},
},
},
}
}
func pebbleService() *corev1.Service {
return &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "pebble",
Namespace: ns,
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeClusterIP,
Selector: map[string]string{
"app": "pebble",
},
Ports: []corev1.ServicePort{
{
Name: "acme",
Port: 14000,
TargetPort: intstr.FromInt(14000),
},
{
Name: "pebble-api",
Port: 15000,
TargetPort: intstr.FromInt(15000),
},
{
Name: "mgmt-api",
Port: 8055,
TargetPort: intstr.FromInt(8055),
},
},
},
}
}
func tailscaleNamespace() *corev1.Namespace {
return &corev1.Namespace{
TypeMeta: metav1.TypeMeta{
Kind: "Namespace",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "tailscale",
},
}
}
// pebbleExternalNameService ensures the operator in the tailscale namespace
// can reach pebble on a DNS name (pebble) that matches its TLS cert.
func pebbleExternalNameService() *corev1.Service {
return &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "pebble",
Namespace: "tailscale",
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeExternalName,
Selector: map[string]string{
"app": "pebble",
},
ExternalName: "pebble.default.svc.cluster.local",
},
}
}
+21 -15
@@ -4,8 +4,10 @@
package e2e
import (
"crypto/tls"
"encoding/json"
"fmt"
"net/http"
"testing"
"time"
@@ -14,25 +16,18 @@ import (
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/config"
"tailscale.com/ipn"
"tailscale.com/tstest"
)
// See [TestMain] for test requirements.
func TestProxy(t *testing.T) {
if apiClient == nil {
t.Skip("TestIngress requires TS_API_CLIENT_SECRET set")
}
cfg := config.GetConfigOrDie()
cl, err := client.New(cfg, client.Options{})
if err != nil {
t.Fatal(err)
if tnClient == nil {
t.Skip("TestProxy requires a working tailnet client")
}
// Create role and role binding to allow a group we'll impersonate to do stuff.
createAndCleanup(t, cl, &rbacv1.Role{
createAndCleanup(t, kubeClient, &rbacv1.Role{
ObjectMeta: objectMeta("tailscale", "read-secrets"),
Rules: []rbacv1.PolicyRule{{
APIGroups: []string{""},
@@ -40,7 +35,7 @@ func TestProxy(t *testing.T) {
Resources: []string{"secrets"},
}},
})
createAndCleanup(t, cl, &rbacv1.RoleBinding{
createAndCleanup(t, kubeClient, &rbacv1.RoleBinding{
ObjectMeta: objectMeta("tailscale", "read-secrets"),
Subjects: []rbacv1.Subject{{
Kind: "Group",
@@ -56,16 +51,25 @@ func TestProxy(t *testing.T) {
operatorSecret := corev1.Secret{
ObjectMeta: objectMeta("tailscale", "operator"),
}
if err := get(t.Context(), cl, &operatorSecret); err != nil {
if err := get(t.Context(), kubeClient, &operatorSecret); err != nil {
t.Fatal(err)
}
// Join tailnet as a client of the API server proxy.
proxyCfg := &rest.Config{
Host: fmt.Sprintf("https://%s:443", hostNameFromOperatorSecret(t, operatorSecret)),
Dial: tailnetClient.Dial,
}
proxyCl, err := client.New(proxyCfg, client.Options{})
proxyCl, err := client.New(proxyCfg, client.Options{
HTTPClient: &http.Client{
Timeout: 10 * time.Second,
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: testCAs,
},
DialContext: tnClient.Dial,
},
},
})
if err != nil {
t.Fatal(err)
}
@@ -77,7 +81,9 @@ func TestProxy(t *testing.T) {
// Wait for up to a minute the first time we use the proxy, to give it time
// to provision the TLS certs.
if err := tstest.WaitFor(time.Minute, func() error {
return get(t.Context(), proxyCl, &allowedSecret)
err := get(t.Context(), proxyCl, &allowedSecret)
t.Logf("get Secret via proxy: %v", err)
return err
}); err != nil {
t.Fatal(err)
}
+680
@@ -0,0 +1,680 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package e2e
import (
"context"
"crypto/rand"
"crypto/tls"
"crypto/x509"
_ "embed"
jsonv1 "encoding/json"
"flag"
"fmt"
"io"
"net/http"
"net/url"
"os"
"os/exec"
"os/signal"
"path/filepath"
"slices"
"strings"
"sync"
"syscall"
"testing"
"time"
"github.com/go-logr/zapr"
"github.com/google/go-containerregistry/pkg/name"
"github.com/google/go-containerregistry/pkg/v1/daemon"
"github.com/google/go-containerregistry/pkg/v1/tarball"
"go.uber.org/zap"
"golang.org/x/oauth2/clientcredentials"
"helm.sh/helm/v3/pkg/action"
"helm.sh/helm/v3/pkg/chart"
"helm.sh/helm/v3/pkg/chart/loader"
"helm.sh/helm/v3/pkg/cli"
"helm.sh/helm/v3/pkg/release"
"helm.sh/helm/v3/pkg/storage/driver"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/portforward"
"k8s.io/client-go/transport/spdy"
"sigs.k8s.io/controller-runtime/pkg/client"
klog "sigs.k8s.io/controller-runtime/pkg/log"
kzap "sigs.k8s.io/controller-runtime/pkg/log/zap"
"sigs.k8s.io/kind/pkg/cluster"
"sigs.k8s.io/kind/pkg/cluster/nodeutils"
"sigs.k8s.io/kind/pkg/cmd"
"tailscale.com/client/tailscale/v2"
"tailscale.com/ipn"
"tailscale.com/ipn/store/mem"
tsapi "tailscale.com/k8s-operator/apis/v1alpha1"
"tailscale.com/tsnet"
)
const (
pebbleTag = "2.8.0"
ns = "default"
tmp = "/tmp/k8s-operator-e2e"
kindClusterName = "k8s-operator-e2e"
)
var (
tsClient = &tailscale.Client{Tailnet: "-"} // For API calls to control.
tnClient *tsnet.Server // For testing real tailnet traffic.
kubeClient client.WithWatch // For k8s API calls.
//go:embed certs/pebble.minica.crt
pebbleMiniCACert []byte
// Either nil (system) or pebble CAs if pebble is deployed for devcontrol.
// pebble has a static "mini" CA that its ACME directory URL serves a cert
// from, and also dynamically generates a different CA for issuing certs.
testCAs *x509.CertPool
//go:embed acl.hujson
requiredACLs []byte
fDevcontrol = flag.Bool("devcontrol", false, "if true, connect to devcontrol at http://localhost:31544. Run devcontrol with "+`
./tool/go run ./cmd/devcontrol \
--generate-test-devices=k8s-operator-e2e \
--dir=/tmp/devcontrol \
--scenario-output-dir=/tmp/k8s-operator-e2e \
--test-dns=http://localhost:8055`)
fSkipCleanup = flag.Bool("skip-cleanup", false, "if true, do not delete the kind cluster (if created) or tmp dir on exit")
fCluster = flag.Bool("cluster", false, "if true, create or use a pre-existing kind cluster named k8s-operator-e2e; otherwise assume a usable cluster already exists in kubeconfig")
fBuild = flag.Bool("build", false, "if true, build and deploy the operator and container images from the current checkout; otherwise assume the operator is already set up")
)
func runTests(m *testing.M) (int, error) {
logger := kzap.NewRaw().Sugar()
klog.SetLogger(zapr.NewLogger(logger.Desugar()))
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
defer cancel()
ossDir, err := gitRootDir()
if err != nil {
return 0, err
}
if err := os.MkdirAll(tmp, 0755); err != nil {
return 0, fmt.Errorf("failed to create temp dir: %w", err)
}
logger.Infof("temp dir: %q", tmp)
logger.Infof("oss dir: %q", ossDir)
var (
kubeconfig string
kindProvider *cluster.Provider
)
if *fCluster {
kubeconfig = filepath.Join(tmp, "kubeconfig")
kindProvider = cluster.NewProvider(
cluster.ProviderWithLogger(cmd.NewLogger()),
)
clusters, err := kindProvider.List()
if err != nil {
return 0, fmt.Errorf("failed to list kind clusters: %w", err)
}
if !slices.Contains(clusters, kindClusterName) {
if err := kindProvider.Create(kindClusterName,
cluster.CreateWithWaitForReady(5*time.Minute),
cluster.CreateWithKubeconfigPath(kubeconfig),
cluster.CreateWithNodeImage("kindest/node:v1.30.0"),
); err != nil {
return 0, fmt.Errorf("failed to create kind cluster: %w", err)
}
}
if !*fSkipCleanup {
defer kindProvider.Delete(kindClusterName, kubeconfig)
defer os.Remove(kubeconfig)
}
}
// Cluster client setup.
restCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
return 0, fmt.Errorf("error loading kubeconfig: %w", err)
}
kubeClient, err = client.NewWithWatch(restCfg, client.Options{Scheme: tsapi.GlobalScheme})
if err != nil {
return 0, fmt.Errorf("error creating Kubernetes client: %w", err)
}
var (
clusterLoginServer string // Login server from cluster Pod point of view.
clientID, clientSecret string // OAuth client for the operator to use.
caPaths []string // Extra CA cert file paths to add to images.
certsDir string = filepath.Join(tmp, "certs") // Directory containing extra CA certs to add to images.
)
if *fDevcontrol {
// Deploy pebble and get its certs.
if err := applyPebbleResources(ctx, kubeClient); err != nil {
return 0, fmt.Errorf("failed to apply pebble resources: %w", err)
}
pebblePod, err := waitForPodReady(ctx, logger, kubeClient, ns, client.MatchingLabels{"app": "pebble"})
if err != nil {
return 0, fmt.Errorf("pebble pod not ready: %w", err)
}
if err := forwardLocalPortToPod(ctx, logger, restCfg, ns, pebblePod, 15000); err != nil {
return 0, fmt.Errorf("failed to set up port forwarding to pebble: %w", err)
}
testCAs = x509.NewCertPool()
if ok := testCAs.AppendCertsFromPEM(pebbleMiniCACert); !ok {
return 0, fmt.Errorf("failed to parse pebble minica cert")
}
var pebbleCAChain []byte
for _, path := range []string{"/intermediates/0", "/roots/0"} {
pem, err := pebbleGet(ctx, 15000, path)
if err != nil {
return 0, err
}
pebbleCAChain = append(pebbleCAChain, pem...)
}
if ok := testCAs.AppendCertsFromPEM(pebbleCAChain); !ok {
return 0, fmt.Errorf("failed to parse pebble ca chain cert")
}
if err := os.MkdirAll(certsDir, 0755); err != nil {
return 0, fmt.Errorf("failed to create certs dir: %w", err)
}
pebbleCAChainPath := filepath.Join(certsDir, "pebble-ca-chain.crt")
if err := os.WriteFile(pebbleCAChainPath, pebbleCAChain, 0644); err != nil {
return 0, fmt.Errorf("failed to write pebble CA chain: %w", err)
}
pebbleMiniCACertPath := filepath.Join(certsDir, "pebble.minica.crt")
if err := os.WriteFile(pebbleMiniCACertPath, pebbleMiniCACert, 0644); err != nil {
return 0, fmt.Errorf("failed to write pebble minica: %w", err)
}
caPaths = []string{pebbleCAChainPath, pebbleMiniCACertPath}
if !*fSkipCleanup {
defer os.RemoveAll(certsDir)
}
// Set up network connectivity between cluster and devcontrol.
//
// For devcontrol -> pebble (DNS mgmt for ACME challenges):
// * Port forward from localhost port 8055 to in-cluster pebble port 8055.
//
// For Pods -> devcontrol (tailscale clients joining the tailnet):
// * Create ssh-server Deployment in cluster.
// * Create reverse ssh tunnel that goes from ssh-server port 31544 to localhost:31544.
if err := forwardLocalPortToPod(ctx, logger, restCfg, ns, pebblePod, 8055); err != nil {
return 0, fmt.Errorf("failed to set up port forwarding to pebble: %w", err)
}
privateKey, publicKey, err := readOrGenerateSSHKey(tmp)
if err != nil {
return 0, fmt.Errorf("failed to read or generate SSH key: %w", err)
}
if !*fSkipCleanup {
defer os.Remove(privateKeyPath)
}
sshServiceIP, err := connectClusterToDevcontrol(ctx, logger, kubeClient, restCfg, privateKey, publicKey)
if err != nil {
return 0, fmt.Errorf("failed to set up cluster->devcontrol connection: %w", err)
}
if !*fSkipCleanup {
defer func() {
if err := cleanupSSHResources(context.Background(), kubeClient); err != nil {
logger.Infof("failed to clean up ssh-server resources: %v", err)
}
}()
}
// Address cluster workloads can reach devcontrol at. Must be a private
// IP to make sure tailscale client code recognises it shouldn't try an
// https fallback. See [controlclient.NewNoiseClient] for details.
clusterLoginServer = fmt.Sprintf("http://%s:31544", sshServiceIP)
b, err := os.ReadFile(filepath.Join(tmp, "api-key.json"))
if err != nil {
return 0, fmt.Errorf("failed to read api-key.json: %w", err)
}
var apiKeyData struct {
APIKey string `json:"apiKey"`
}
if err := jsonv1.Unmarshal(b, &apiKeyData); err != nil {
return 0, fmt.Errorf("failed to parse api-key.json: %w", err)
}
if apiKeyData.APIKey == "" {
return 0, fmt.Errorf("api-key.json did not contain an API key")
}
// Finish setting up tsClient.
baseURL, err := url.Parse("http://localhost:31544")
if err != nil {
return 0, fmt.Errorf("parse url: %w", err)
}
tsClient.BaseURL = baseURL
tsClient.APIKey = apiKeyData.APIKey
tsClient.HTTP = &http.Client{}
// Set ACLs and create OAuth client.
if err := tsClient.PolicyFile().Set(ctx, string(requiredACLs), ""); err != nil {
return 0, fmt.Errorf("failed to set ACLs: %w", err)
}
logger.Infof("ACLs configured")
key, err := tsClient.Keys().CreateOAuthClient(ctx, tailscale.CreateOAuthClientRequest{
Scopes: []string{"auth_keys", "devices:core", "services"},
Tags: []string{"tag:k8s-operator"},
Description: "k8s-operator client for e2e tests",
})
if err != nil {
return 0, fmt.Errorf("failed to create OAuth client: %w", err)
}
clientID = key.ID
clientSecret = key.Key
} else {
clientSecret = os.Getenv("TS_API_CLIENT_SECRET")
if clientSecret == "" {
return 0, fmt.Errorf("must use --devcontrol or set TS_API_CLIENT_SECRET to an OAuth client suitable for the operator")
}
// Format is "tskey-client-<id>-<random>".
parts := strings.Split(clientSecret, "-")
if len(parts) != 4 {
return 0, fmt.Errorf("TS_API_CLIENT_SECRET is not valid")
}
clientID = parts[2]
credentials := clientcredentials.Config{
ClientID: clientID,
ClientSecret: clientSecret,
TokenURL: fmt.Sprintf("%s/api/v2/oauth/token", ipn.DefaultControlURL),
Scopes: []string{"auth_keys"},
}
baseURL, _ := url.Parse(ipn.DefaultControlURL)
tsClient = &tailscale.Client{
Tailnet: "-",
HTTP: credentials.Client(ctx),
BaseURL: baseURL,
}
}
var ossTag string
if *fBuild {
// TODO(tomhjp): proper support for --build=false and layering pebble certs on top of existing images.
// TODO(tomhjp): support non-local platform.
// TODO(tomhjp): build tsrecorder as well.
// Build tailscale/k8s-operator, tailscale/tailscale, tailscale/k8s-proxy, with pebble CAs added.
ossTag, err = tagForRepo(ossDir)
if err != nil {
return 0, err
}
logger.Infof("using OSS image tag: %q", ossTag)
ossImageToTarget := map[string]string{
"local/k8s-operator": "publishdevoperator",
"local/tailscale": "publishdevimage",
"local/k8s-proxy": "publishdevproxy",
}
for img, target := range ossImageToTarget {
if err := buildImage(ctx, ossDir, img, target, ossTag, caPaths); err != nil {
return 0, err
}
nodes, err := kindProvider.ListInternalNodes(kindClusterName)
if err != nil {
return 0, fmt.Errorf("failed to list kind nodes: %w", err)
}
// TODO(tomhjp): can be made more efficient and portable if we
// stream built image tarballs straight to the node rather than
// going via the daemon.
// TODO(tomhjp): support --build with non-kind clusters.
imgRef, err := name.ParseReference(fmt.Sprintf("%s:%s", img, ossTag))
if err != nil {
return 0, fmt.Errorf("failed to parse image reference: %w", err)
}
builtImg, err := daemon.Image(imgRef)
if err != nil {
return 0, fmt.Errorf("failed to get image from daemon: %w", err)
}
pr, pw := io.Pipe()
go func() {
defer pw.Close()
if err := tarball.Write(imgRef, builtImg, pw); err != nil {
logger.Infof("failed to write image to pipe: %v", err)
}
}()
for _, n := range nodes {
if err := nodeutils.LoadImageArchive(n, pr); err != nil {
return 0, fmt.Errorf("failed to load image into node %q: %w", n.String(), err)
}
}
}
}
// Generate CRDs for the helm chart.
cmd := exec.CommandContext(ctx, "go", "run", "tailscale.com/cmd/k8s-operator/generate", "helmcrd")
cmd.Dir = ossDir
out, err := cmd.CombinedOutput()
if err != nil {
return 0, fmt.Errorf("failed to generate CRD: %v: %s", err, out)
}
// Load and install helm chart.
chart, err := loader.Load(filepath.Join(ossDir, "cmd", "k8s-operator", "deploy", "chart"))
if err != nil {
return 0, fmt.Errorf("failed to load helm chart: %w", err)
}
values := map[string]any{
"loginServer": clusterLoginServer,
"oauth": map[string]any{
"clientId": clientID,
"clientSecret": clientSecret,
},
"apiServerProxyConfig": map[string]any{
"mode": "true",
},
"operatorConfig": map[string]any{
"logging": "debug",
"extraEnv": []map[string]any{
{
"name": "K8S_PROXY_IMAGE",
"value": "local/k8s-proxy:" + ossTag,
},
{
"name": "TS_DEBUG_ACME_DIRECTORY_URL",
"value": "https://pebble:14000/dir",
},
},
"image": map[string]any{
"repo": "local/k8s-operator",
"tag": ossTag,
"pullPolicy": "IfNotPresent",
},
},
"proxyConfig": map[string]any{
"defaultProxyClass": "default",
"image": map[string]any{
"repository": "local/tailscale",
"tag": ossTag,
},
},
}
settings := cli.New()
settings.KubeConfig = kubeconfig
settings.SetNamespace("tailscale")
helmCfg := &action.Configuration{}
if err := helmCfg.Init(settings.RESTClientGetter(), "tailscale", "", logger.Infof); err != nil {
return 0, fmt.Errorf("failed to initialize helm action configuration: %w", err)
}
const relName = "tailscale-operator" // TODO(tomhjp): maybe configurable if others use a different value.
f := upgraderOrInstaller(helmCfg, relName)
if _, err := f(ctx, relName, chart, values); err != nil {
return 0, fmt.Errorf("failed to install %q via helm: %w", relName, err)
}
if err := applyDefaultProxyClass(ctx, kubeClient); err != nil {
return 0, fmt.Errorf("failed to apply default ProxyClass: %w", err)
}
caps := tailscale.KeyCapabilities{}
caps.Devices.Create.Preauthorized = true
caps.Devices.Create.Ephemeral = true
caps.Devices.Create.Tags = []string{"tag:k8s"}
authKey, err := tsClient.Keys().CreateAuthKey(ctx, tailscale.CreateKeyRequest{
Capabilities: caps,
ExpirySeconds: 600,
Description: "e2e test authkey",
})
if err != nil {
return 0, err
}
defer tsClient.Keys().Delete(context.Background(), authKey.ID)
tnClient = &tsnet.Server{
ControlURL: tsClient.BaseURL.String(),
Hostname: "test-proxy",
Ephemeral: true,
Store: &mem.Store{},
AuthKey: authKey.Key,
}
_, err = tnClient.Up(ctx)
if err != nil {
return 0, err
}
defer tnClient.Close()
return m.Run(), nil
}
func upgraderOrInstaller(cfg *action.Configuration, releaseName string) helmInstallerFunc {
hist := action.NewHistory(cfg)
hist.Max = 1
helmVersions, err := hist.Run(releaseName)
if err == driver.ErrReleaseNotFound || (len(helmVersions) > 0 && helmVersions[0].Info.Status == release.StatusUninstalled) {
return helmInstaller(cfg, releaseName)
}
return helmUpgrader(cfg)
}
func helmUpgrader(cfg *action.Configuration) helmInstallerFunc {
upgrade := action.NewUpgrade(cfg)
upgrade.Namespace = "tailscale"
upgrade.Install = true
upgrade.Wait = true
upgrade.Timeout = 5 * time.Minute
return upgrade.RunWithContext
}
func helmInstaller(cfg *action.Configuration, releaseName string) helmInstallerFunc {
install := action.NewInstall(cfg)
install.Namespace = "tailscale"
install.CreateNamespace = true
install.ReleaseName = releaseName
install.Wait = true
install.Timeout = 5 * time.Minute
install.Replace = true
return func(ctx context.Context, _ string, chart *chart.Chart, values map[string]any) (*release.Release, error) {
return install.RunWithContext(ctx, chart, values)
}
}
type helmInstallerFunc func(context.Context, string, *chart.Chart, map[string]any) (*release.Release, error)
// gitRootDir returns the top-level directory of the current git repo. Expects
// to be run from inside a git repo.
func gitRootDir() (string, error) {
top, err := exec.Command("git", "rev-parse", "--show-toplevel").Output()
if err != nil {
return "", fmt.Errorf("failed to find git top level (not in corp git?): %w", err)
}
return strings.TrimSpace(string(top)), nil
}
func tagForRepo(dir string) (string, error) {
cmd := exec.Command("git", "rev-parse", "--short", "HEAD")
cmd.Dir = dir
out, err := cmd.Output()
if err != nil {
return "", fmt.Errorf("failed to get short commit hash for repo %q: %w", dir, err)
}
tag := strings.TrimSpace(string(out))
// If dirty, append an extra random tag to ensure unique image tags.
cmd = exec.Command("git", "status", "--porcelain")
cmd.Dir = dir
out, err = cmd.Output()
if err != nil {
return "", fmt.Errorf("failed to check git status for repo %q: %w", dir, err)
}
if strings.TrimSpace(string(out)) != "" {
tag += "-" + strings.ToLower(rand.Text())
}
return tag, nil
}
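tagForRepo above derives an image tag from the short commit hash, appending a random suffix when the worktree is dirty so rebuilt images never reuse a stale tag. A stdlib-only sketch of the same scheme (`makeImageTag` is a hypothetical helper; the real code shells out to git and uses `rand.Text()` for the suffix):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// makeImageTag mirrors the tagging scheme used by tagForRepo: the short
// commit hash alone for a clean worktree, or the hash plus a random
// suffix when the tree is dirty, so rebuilt images always get a fresh tag.
func makeImageTag(shortHash string, dirty bool) string {
	if !dirty {
		return shortHash
	}
	buf := make([]byte, 4)
	if _, err := rand.Read(buf); err != nil {
		panic(err) // crypto/rand should not fail on a healthy system
	}
	return shortHash + "-" + hex.EncodeToString(buf)
}

func main() {
	fmt.Println(makeImageTag("73cb3b4", false))
	fmt.Println(makeImageTag("73cb3b4", true))
}
```

The first call prints the bare hash; the second appends eight random hex characters.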
func applyDefaultProxyClass(ctx context.Context, cl client.Client) error {
pc := &tsapi.ProxyClass{
TypeMeta: metav1.TypeMeta{
APIVersion: tsapi.SchemeGroupVersion.String(),
Kind: tsapi.ProxyClassKind,
},
ObjectMeta: metav1.ObjectMeta{
Name: "default",
},
Spec: tsapi.ProxyClassSpec{
StatefulSet: &tsapi.StatefulSet{
Pod: &tsapi.Pod{
TailscaleInitContainer: &tsapi.Container{
ImagePullPolicy: "IfNotPresent",
},
TailscaleContainer: &tsapi.Container{
ImagePullPolicy: "IfNotPresent",
},
},
},
},
}
owner := client.FieldOwner("k8s-test")
if err := cl.Patch(ctx, pc, client.Apply, owner); err != nil {
return fmt.Errorf("failed to apply default ProxyClass: %w", err)
}
return nil
}
// forwardLocalPortToPod sets up port forwarding to the specified Pod and remote port.
// It runs until the provided ctx is done.
func forwardLocalPortToPod(ctx context.Context, logger *zap.SugaredLogger, cfg *rest.Config, ns, podName string, port int) error {
transport, upgrader, err := spdy.RoundTripperFor(cfg)
if err != nil {
return fmt.Errorf("failed to create round tripper: %w", err)
}
u, err := url.Parse(fmt.Sprintf("%s%s/api/v1/namespaces/%s/pods/%s/portforward", cfg.Host, cfg.APIPath, ns, podName))
if err != nil {
return fmt.Errorf("failed to parse URL: %w", err)
}
dialer := spdy.NewDialer(upgrader, &http.Client{Transport: transport}, "POST", u)
stopChan := make(chan struct{}, 1)
readyChan := make(chan struct{}, 1)
ports := []string{fmt.Sprintf("%d:%d", port, port)}
// TODO(tomhjp): work out how zap logger can be used instead of stdout/err.
pf, err := portforward.New(dialer, ports, stopChan, readyChan, os.Stdout, os.Stderr)
if err != nil {
return fmt.Errorf("failed to create port forwarder: %w", err)
}
go func() {
if err := pf.ForwardPorts(); err != nil {
logger.Infof("Port forwarding error: %v", err)
}
}()
var once sync.Once
go func() {
<-ctx.Done()
once.Do(func() { close(stopChan) })
}()
// Wait for port forwarding to be ready
select {
case <-readyChan:
logger.Infof("Port forwarding to Pod %s/%s ready", ns, podName)
case <-time.After(10 * time.Second):
once.Do(func() { close(stopChan) })
return fmt.Errorf("timeout waiting for port forward to be ready")
}
return nil
}
// waitForPodReady waits for at least 1 Pod matching the label selector to be
// in Ready state. It returns the name of the first ready Pod it finds.
func waitForPodReady(ctx context.Context, logger *zap.SugaredLogger, cl client.WithWatch, ns string, labelSelector client.MatchingLabels) (string, error) {
pods := &corev1.PodList{}
w, err := cl.Watch(ctx, pods, client.InNamespace(ns), client.MatchingLabels(labelSelector))
if err != nil {
return "", fmt.Errorf("failed to create pod watcher: %w", err)
}
defer w.Stop()
for {
select {
case event, ok := <-w.ResultChan():
if !ok {
return "", fmt.Errorf("watcher channel closed")
}
switch event.Type {
case watch.Added, watch.Modified:
if pod, ok := event.Object.(*corev1.Pod); ok {
for _, condition := range pod.Status.Conditions {
if condition.Type == corev1.PodReady && condition.Status == corev1.ConditionTrue {
logger.Infof("pod %s is ready", pod.Name)
return pod.Name, nil
}
}
}
case watch.Error:
return "", fmt.Errorf("watch error: %v", event.Object)
}
case <-ctx.Done():
return "", fmt.Errorf("context ended while waiting for pod to be ready: %w", ctx.Err())
}
}
}
func pebbleGet(ctx context.Context, port uint16, path string) ([]byte, error) {
pebbleClient := &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: testCAs,
},
},
Timeout: 10 * time.Second,
}
req, err := http.NewRequestWithContext(ctx, "GET", fmt.Sprintf("https://localhost:%d%s", port, path), nil)
if err != nil {
return nil, fmt.Errorf("failed to create pebble request: %w", err)
}
resp, err := pebbleClient.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to fetch pebble root CA: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("HTTP %d when fetching pebble root CA", resp.StatusCode)
}
b, err := io.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read pebble root CA response: %w", err)
}
return b, nil
}
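The pool used as `RootCAs` above is seeded from pebble's PEM chain via `AppendCertsFromPEM`. A self-contained sketch of that wiring, substituting a freshly generated self-signed certificate for pebble's real root (all names here are illustrative, not part of the test harness):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// selfSignedCAPEM generates a throwaway self-signed CA certificate,
// standing in for pebble's minica root in this sketch.
func selfSignedCAPEM() []byte {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "sketch test CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, pub, priv)
	if err != nil {
		panic(err)
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
}

// poolFromPEM mirrors the testCAs setup: a fresh pool seeded only with
// the extra roots, suitable for use as tls.Config{RootCAs: pool}.
func poolFromPEM(pemBytes []byte) (*x509.CertPool, bool) {
	pool := x509.NewCertPool()
	ok := pool.AppendCertsFromPEM(pemBytes)
	return pool, ok
}

func main() {
	_, ok := poolFromPEM(selfSignedCAPEM())
	fmt.Println(ok)
}
```

The resulting pool would be plugged into an `http.Client` exactly as `pebbleClient` does above.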
func buildImage(ctx context.Context, dir, repo, target, tag string, extraCACerts []string) error {
var files []string
for _, f := range extraCACerts {
files = append(files, fmt.Sprintf("%s:/etc/ssl/certs/%s", f, filepath.Base(f)))
}
cmd := exec.CommandContext(ctx, "make", target,
"PLATFORM=local",
fmt.Sprintf("TAGS=%s", tag),
fmt.Sprintf("REPO=%s", repo),
fmt.Sprintf("FILES=%s", strings.Join(files, ",")),
)
cmd.Dir = dir
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to build image %q: %w", target, err)
}
return nil
}
@@ -0,0 +1,352 @@
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
package e2e
import (
"context"
"crypto/ed25519"
"crypto/rand"
"encoding/hex"
"encoding/pem"
"fmt"
"io"
"net"
"os"
"path/filepath"
"time"
"go.uber.org/zap"
"golang.org/x/crypto/ssh"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/client-go/rest"
"sigs.k8s.io/controller-runtime/pkg/client"
tailscaleroot "tailscale.com"
"tailscale.com/types/ptr"
)
const (
keysFilePath = "/root/.ssh/authorized_keys"
sshdConfig = `
Port 8022
# Allow reverse tunnels
GatewayPorts yes
AllowTcpForwarding yes
# Auth
PermitRootLogin yes
PasswordAuthentication no
PubkeyAuthentication yes
AuthorizedKeysFile ` + keysFilePath
)
var privateKeyPath = filepath.Join(tmp, "id_ed25519")
func connectClusterToDevcontrol(ctx context.Context, logger *zap.SugaredLogger, cl client.WithWatch, restConfig *rest.Config, privKey ed25519.PrivateKey, pubKey []byte) (clusterIP string, _ error) {
logger.Info("Setting up SSH reverse tunnel from cluster to devcontrol...")
var err error
if clusterIP, err = applySSHResources(ctx, cl, tailscaleroot.AlpineDockerTag, pubKey); err != nil {
return "", fmt.Errorf("failed to apply ssh-server resources: %w", err)
}
sshPodName, err := waitForPodReady(ctx, logger, cl, ns, client.MatchingLabels{"app": "ssh-server"})
if err != nil {
return "", fmt.Errorf("ssh-server Pod not ready: %w", err)
}
if err := forwardLocalPortToPod(ctx, logger, restConfig, ns, sshPodName, 8022); err != nil {
return "", fmt.Errorf("failed to set up port forwarding to ssh-server: %w", err)
}
if err := reverseTunnel(ctx, logger, privKey, "localhost:8022", 31544, "localhost:31544"); err != nil {
return "", fmt.Errorf("failed to set up reverse tunnel: %w", err)
}
return clusterIP, nil
}
func reverseTunnel(ctx context.Context, logger *zap.SugaredLogger, privateKey ed25519.PrivateKey, sshHost string, remotePort uint16, fwdTo string) error {
signer, err := ssh.NewSignerFromKey(privateKey)
if err != nil {
return fmt.Errorf("failed to create signer: %w", err)
}
config := &ssh.ClientConfig{
User: "root",
Auth: []ssh.AuthMethod{
ssh.PublicKeys(signer),
},
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
Timeout: 30 * time.Second,
}
conn, err := ssh.Dial("tcp", sshHost, config)
if err != nil {
return fmt.Errorf("failed to connect to SSH server: %w", err)
}
logger.Infof("Connected to SSH server at %s", sshHost)
go func() {
defer conn.Close()
// Start listening on remote port.
remoteAddr := fmt.Sprintf("localhost:%d", remotePort)
remoteLn, err := conn.Listen("tcp", remoteAddr)
if err != nil {
logger.Infof("Failed to listen on remote port %d: %v", remotePort, err)
return
}
defer remoteLn.Close()
logger.Infof("Reverse tunnel ready on remote addr %s -> local addr %s", remoteAddr, fwdTo)
for {
remoteConn, err := remoteLn.Accept()
if err != nil {
logger.Infof("Failed to accept remote connection: %v", err)
return
}
go handleConnection(ctx, logger, remoteConn, fwdTo)
}
}()
return nil
}
func handleConnection(ctx context.Context, logger *zap.SugaredLogger, remoteConn net.Conn, fwdTo string) {
go func() {
<-ctx.Done()
remoteConn.Close()
}()
var d net.Dialer
localConn, err := d.DialContext(ctx, "tcp", fwdTo)
if err != nil {
logger.Infof("Failed to connect to local service %s: %v", fwdTo, err)
return
}
go func() {
<-ctx.Done()
localConn.Close()
}()
go func() {
if _, err := io.Copy(localConn, remoteConn); err != nil {
logger.Infof("Error copying remote->local: %v", err)
}
}()
go func() {
if _, err := io.Copy(remoteConn, localConn); err != nil {
logger.Infof("Error copying local->remote: %v", err)
}
}()
}
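handleConnection's pair of `io.Copy` goroutines is a standard bidirectional relay. A self-contained sketch of the same pattern over loopback TCP (the echo backend stands in for devcontrol; error handling is elided because this is a sketch, not production code):

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// relay wires two conns together with io.Copy in each direction, the
// same pattern handleConnection uses for the reverse tunnel, closing
// both sides once either direction finishes.
func relay(a, b net.Conn) {
	done := make(chan struct{}, 2)
	go func() { io.Copy(a, b); done <- struct{}{} }()
	go func() { io.Copy(b, a); done <- struct{}{} }()
	<-done
	a.Close()
	b.Close()
	<-done
}

// echoThroughRelay sends msg through a relayed loopback connection to a
// trivial echo server and returns what comes back.
func echoThroughRelay(msg string) string {
	backend, _ := net.Listen("tcp", "127.0.0.1:0") // stands in for devcontrol
	go func() {
		c, _ := backend.Accept()
		io.Copy(c, c) // echo everything back
		c.Close()
	}()
	front, _ := net.Listen("tcp", "127.0.0.1:0") // the tunnelled listener
	go func() {
		rc, _ := front.Accept()
		lc, _ := net.Dial("tcp", backend.Addr().String())
		relay(rc, lc)
	}()
	conn, _ := net.Dial("tcp", front.Addr().String())
	defer conn.Close()
	conn.Write([]byte(msg))
	buf := make([]byte, len(msg))
	io.ReadFull(conn, buf)
	return string(buf)
}

func main() {
	fmt.Println(echoThroughRelay("ping"))
}
```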
func readOrGenerateSSHKey(tmp string) (ed25519.PrivateKey, []byte, error) {
var privateKey ed25519.PrivateKey
b, err := os.ReadFile(privateKeyPath)
switch {
case os.IsNotExist(err):
_, privateKey, err = ed25519.GenerateKey(rand.Reader)
if err != nil {
return nil, nil, fmt.Errorf("failed to generate key: %w", err)
}
privKeyPEM, err := ssh.MarshalPrivateKey(privateKey, "")
if err != nil {
return nil, nil, fmt.Errorf("failed to marshal SSH private key: %w", err)
}
f, err := os.OpenFile(privateKeyPath, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
return nil, nil, fmt.Errorf("failed to open SSH private key file: %w", err)
}
defer f.Close()
if err := pem.Encode(f, privKeyPEM); err != nil {
return nil, nil, fmt.Errorf("failed to write SSH private key: %w", err)
}
case err != nil:
return nil, nil, fmt.Errorf("failed to read SSH private key: %w", err)
default:
pKey, err := ssh.ParseRawPrivateKey(b)
if err != nil {
return nil, nil, fmt.Errorf("failed to parse SSH private key: %w", err)
}
pKeyPointer, ok := pKey.(*ed25519.PrivateKey)
if !ok {
return nil, nil, fmt.Errorf("SSH private key is not ed25519: %T", pKey)
}
privateKey = *pKeyPointer
}
sshPublicKey, err := ssh.NewPublicKey(privateKey.Public())
if err != nil {
return nil, nil, fmt.Errorf("failed to create SSH public key: %w", err)
}
return privateKey, ssh.MarshalAuthorizedKey(sshPublicKey), nil
}
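readOrGenerateSSHKey persists the key in OpenSSH PEM form via `ssh.MarshalPrivateKey` and parses it back with `ssh.ParseRawPrivateKey`. A stdlib-only sketch of the same generate/encode/parse round trip, using a PKCS#8 container in place of the OpenSSH format (an assumption for the sake of a self-contained example):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// roundTripEd25519 generates an ed25519 key, PEM-encodes it, and parses
// it back, reporting whether the recovered key matches the original.
func roundTripEd25519() bool {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	der, err := x509.MarshalPKCS8PrivateKey(priv)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: der})

	block, _ := pem.Decode(pemBytes)
	parsed, err := x509.ParsePKCS8PrivateKey(block.Bytes)
	if err != nil {
		panic(err)
	}
	// For ed25519, ParsePKCS8PrivateKey returns ed25519.PrivateKey by value.
	return parsed.(ed25519.PrivateKey).Public().(ed25519.PublicKey).Equal(pub)
}

func main() {
	fmt.Println(roundTripEd25519())
}
```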
func applySSHResources(ctx context.Context, cl client.Client, alpineTag string, pubKey []byte) (string, error) {
owner := client.FieldOwner("k8s-test")
if err := cl.Patch(ctx, sshDeployment(alpineTag, pubKey), client.Apply, owner); err != nil {
return "", fmt.Errorf("failed to apply ssh-server Deployment: %w", err)
}
if err := cl.Patch(ctx, sshConfigMap(pubKey), client.Apply, owner); err != nil {
return "", fmt.Errorf("failed to apply ssh-server ConfigMap: %w", err)
}
svc := sshService()
if err := cl.Patch(ctx, svc, client.Apply, owner); err != nil {
return "", fmt.Errorf("failed to apply ssh-server Service: %w", err)
}
return svc.Spec.ClusterIP, nil
}
func cleanupSSHResources(ctx context.Context, cl client.Client) error {
noGrace := &client.DeleteOptions{
GracePeriodSeconds: ptr.To[int64](0),
}
if err := cl.Delete(ctx, sshDeployment("", nil), noGrace); err != nil {
return fmt.Errorf("failed to delete ssh-server Deployment: %w", err)
}
if err := cl.Delete(ctx, sshConfigMap(nil), noGrace); err != nil {
return fmt.Errorf("failed to delete ssh-server ConfigMap: %w", err)
}
if err := cl.Delete(ctx, sshService(), noGrace); err != nil {
return fmt.Errorf("failed to delete control Service: %w", err)
}
return nil
}
func sshDeployment(tag string, pubKey []byte) *appsv1.Deployment {
return &appsv1.Deployment{
TypeMeta: metav1.TypeMeta{
Kind: "Deployment",
APIVersion: "apps/v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "ssh-server",
Namespace: ns,
},
Spec: appsv1.DeploymentSpec{
Replicas: ptr.To[int32](1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"app": "ssh-server",
},
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"app": "ssh-server",
},
Annotations: map[string]string{
"pubkey": hex.EncodeToString(pubKey), // Ensure new key triggers rollout.
},
},
Spec: corev1.PodSpec{
Containers: []corev1.Container{
{
Name: "ssh-server",
Image: fmt.Sprintf("alpine:%s", tag),
Command: []string{
"sh", "-c",
"apk add openssh-server; ssh-keygen -A; /usr/sbin/sshd -D -e",
},
Ports: []corev1.ContainerPort{
{
Name: "ctrl-port-fwd",
ContainerPort: 31544,
Protocol: corev1.ProtocolTCP,
},
{
Name: "ssh",
ContainerPort: 8022,
Protocol: corev1.ProtocolTCP,
},
},
ReadinessProbe: &corev1.Probe{
ProbeHandler: corev1.ProbeHandler{
TCPSocket: &corev1.TCPSocketAction{
Port: intstr.FromInt(8022),
},
},
InitialDelaySeconds: 1,
PeriodSeconds: 1,
},
VolumeMounts: []corev1.VolumeMount{
{
Name: "sshd-config",
MountPath: "/etc/ssh/sshd_config.d/reverse-tunnel.conf",
SubPath: "reverse-tunnel.conf",
},
{
Name: "sshd-config",
MountPath: keysFilePath,
SubPath: "authorized_keys",
},
},
},
},
Volumes: []corev1.Volume{
{
Name: "sshd-config",
VolumeSource: corev1.VolumeSource{
ConfigMap: &corev1.ConfigMapVolumeSource{
LocalObjectReference: corev1.LocalObjectReference{
Name: "ssh-server-config",
},
},
},
},
},
},
},
},
}
}
func sshConfigMap(pubKey []byte) *corev1.ConfigMap {
return &corev1.ConfigMap{
TypeMeta: metav1.TypeMeta{
Kind: "ConfigMap",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "ssh-server-config",
Namespace: ns,
},
Data: map[string]string{
"reverse-tunnel.conf": sshdConfig,
"authorized_keys": string(pubKey),
},
}
}
func sshService() *corev1.Service {
return &corev1.Service{
TypeMeta: metav1.TypeMeta{
Kind: "Service",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "control",
Namespace: ns,
},
Spec: corev1.ServiceSpec{
Type: corev1.ServiceTypeClusterIP,
Selector: map[string]string{
"app": "ssh-server",
},
Ports: []corev1.ServicePort{
{
Name: "tunnel",
Port: 31544,
Protocol: corev1.ProtocolTCP,
},
},
},
}
}
@@ -65,12 +65,12 @@ tailscale.com/cmd/tailscale dependencies: (generated by github.com/tailscale/dep
github.com/tailscale/web-client-prebuilt from tailscale.com/client/web
github.com/toqueteos/webbrowser from tailscale.com/cmd/tailscale/cli+
github.com/x448/float16 from github.com/fxamacker/cbor/v2
go.yaml.in/yaml/v2 from sigs.k8s.io/yaml
💣 go4.org/mem from tailscale.com/client/local+
go4.org/netipx from tailscale.com/net/tsaddr
W 💣 golang.zx2c4.com/wireguard/windows/tunnel/winipcfg from tailscale.com/net/netmon+
k8s.io/client-go/util/homedir from tailscale.com/cmd/tailscale/cli
sigs.k8s.io/yaml from tailscale.com/cmd/tailscale/cli
sigs.k8s.io/yaml/goyaml.v2 from sigs.k8s.io/yaml
software.sslmate.com/src/go-pkcs12 from tailscale.com/cmd/tailscale/cli
software.sslmate.com/src/go-pkcs12/internal/rc2 from software.sslmate.com/src/go-pkcs12
tailscale.com from tailscale.com/version