Compare commits

..

102 commits

Author SHA1 Message Date
Renovate bot
1b3e57814a Update Helm release cilium to v1.17.2 2025-03-16 00:01:46 +00:00
a82d5b8d23 Remove these that no longer get deployed anyway.
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-13 10:51:10 +00:00
c7d82f4502 Merge pull request 'Update Helm release nvidia-device-plugin to v0.17.1' (#30) from renovate/nvidia-device-plugin-0.x into main
Reviewed-on: #30
2025-03-13 10:45:24 +00:00
a5221d8389 Merge pull request 'Update 1password/connect-api Docker tag to v1.7.3' (#17) from renovate/1password-connect-api-1.x into main
Reviewed-on: #17
2025-03-13 10:43:46 +00:00
f9e1f2fef8 Merge pull request 'Update 1password/connect-sync Docker tag to v1.7.3' (#18) from renovate/1password-connect-sync-1.x into main
Reviewed-on: #18
2025-03-13 10:42:55 +00:00
832299c24a Merge branch 'main' into renovate/1password-connect-api-1.x 2025-03-13 10:41:47 +00:00
ced8b183d3 Merge pull request 'Update Helm release secrets-store-csi-driver to v1.4.8' (#31) from renovate/secrets-store-csi-driver-1.x into main
Reviewed-on: #31
2025-03-13 10:38:06 +00:00
8c9baf2e97 Merge pull request 'Update Helm release external-dns to v1.15.2' (#28) from renovate/external-dns-1.x into main
Reviewed-on: #28
2025-03-13 10:32:38 +00:00
Renovate bot
c1990ed111 Update Helm release secrets-store-csi-driver to v1.4.8 2025-03-13 00:01:32 +00:00
Renovate bot
012a592fc5 Update Helm release nvidia-device-plugin to v0.17.1 2025-03-13 00:01:29 +00:00
Renovate bot
71da7530d7 Update Helm release external-dns to v1.15.2 2025-03-12 00:01:30 +00:00
ac77a59260 f it, this works
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 17:16:44 +00:00
dfaf780995 Move to kustomize+helm because omg that is one annoying chart
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 17:12:40 +00:00
7197ab2ae9 allow drop privs lol
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 16:37:40 +00:00
03d60de00c allow drop privs lol
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 16:36:51 +00:00
896c5cda02 rootless 80 another way
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 16:33:30 +00:00
90ca469ba4 rootless 80
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 16:28:30 +00:00
3fc6869c51 seems to be listening on 80 as well...
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 16:25:15 +00:00
1d56b4636c TargetPort and move port to object
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 16:21:17 +00:00
4831ec4212 try the other way
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 16:17:09 +00:00
cb118ff57a port 80 and alias
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 16:13:02 +00:00
ffa75201df Merge pull request 'Update Helm release paperless-ngx to v0.24.1' (#27) from renovate/paperless-ngx-0.x into main
Reviewed-on: #27
2025-03-11 14:53:34 +00:00
2ed9b42b5d Merge pull request 'Update Helm release node-red to v5.4.0' (#26) from renovate/node-red-5.x into main
Reviewed-on: #26
2025-03-11 14:51:39 +00:00
1bf3f22d61 now we get the latest via the helm chart, why not
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 14:48:07 +00:00
eeab597186 Image switch too
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 14:45:23 +00:00
6486a1148b Switch from bitnami to official chart
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 14:34:02 +00:00
8e16be8a29 Well... will this work?
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 13:35:29 +00:00
5a9c412272 Oh great, this is now completely broken by bitnami, we can _try_ oci
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 13:32:13 +00:00
5aa80a4159 wut, no...
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 13:27:06 +00:00
392f2491cd Renovate got confused I think!
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 13:24:16 +00:00
832acba609 Merge branch 'renovate/external-dns-8.x'
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 13:20:46 +00:00
e34aa444e5 Merge branch 'main' into renovate/external-dns-8.x 2025-03-11 13:19:50 +00:00
5cfb35e557 pin the container version for the helm upgrade
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-11 13:18:01 +00:00
7252c5d113 Merge pull request 'Update eclipse-mosquitto Docker tag to v1.6.15' (#19) from renovate/eclipse-mosquitto-1.x into main
Reviewed-on: #19
Reviewed-by: Martyn <m@rtyn.berlin>
2025-03-11 09:59:17 +00:00
Renovate bot
beb22c8bc6 Update Helm release external-dns to v8 2025-03-11 00:01:26 +00:00
ed320ab060 move back for now
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-08 21:13:23 +00:00
6162a7ef85 Welcome to my block list
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-08 09:38:01 +00:00
700194f3bb nginx specific regex, because of course
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-08 09:26:12 +00:00
162fb3f013 Alexa, die in a fire.
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-08 09:14:42 +00:00
dbf06a7347 Alexa, die in a fire.
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-08 09:11:44 +00:00
86ce1aa807 Alexa, die in a fire.
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-08 08:59:02 +00:00
6dab365801 New container, new path. irritating. Oh well, cleanup too.
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-07 13:07:48 +00:00
1610d11112 If you won't play nice, don't play.
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-06 18:32:11 +00:00
d4e4e4354d YAML was a mistake
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-06 18:29:40 +00:00
d52f8c147b not a list huh
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-06 18:28:35 +00:00
6019477cda nginx ftw
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-06 18:26:36 +00:00
41ae6765f4 Add nginx hosting my smb bucket
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-06 18:24:35 +00:00
9605f2baeb Move out of the way please
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-06 18:24:09 +00:00
261c7ccd4d OOps copypasta
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-06 17:55:40 +00:00
838c4acc41 Allow mounting the s3 storage via smb
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-06 17:50:16 +00:00
4c6a97e52a Prep for having yet another s3
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-06 17:05:12 +00:00
Renovate bot
4efb41b584 Update Helm release paperless-ngx to v0.24.1 2025-03-06 00:02:23 +00:00
Renovate bot
c95ee67191 Update Helm release node-red to v5.4.0 2025-03-06 00:02:21 +00:00
dd6202a26f Bump some versions
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 18:09:11 +00:00
a70b28c905 Let it go
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 17:39:55 +00:00
2b39418104 Oops wrong directory
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 17:39:10 +00:00
83b0b4be59 Expose so I can use if needed in-house
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 17:38:08 +00:00
13a005f6b5 Calm argo tf down
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 17:36:37 +00:00
440682a8ff Add the tailscale proxy for use by ISO downloaders
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 17:34:34 +00:00
7a57c678c2 Move TS to a separate deploy
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 16:26:53 +00:00
020d13ada2 Not optional lol
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 11:49:46 +00:00
f709eec878 Real sidecar perhaps with manual start
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 11:41:43 +00:00
3c3107b021 This is frustratingly necessary
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 11:27:13 +00:00
9cd9b6fdfe Fix version, renovate has our back
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 11:25:58 +00:00
c91aa4c42b This is frustratingly necessary
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 11:23:02 +00:00
a885f275a6 This is frustratingly necessary
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 11:16:50 +00:00
a7597c2b52 Preparing for magic
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 11:06:34 +00:00
187db433de Preparing for magic
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 11:05:40 +00:00
8893c46d72 Merge pull request 'Update grafana/grafana Docker tag to v9.5.21' (#21) from renovate/grafana-grafana-9.x into main
Reviewed-on: #21
2025-03-05 10:49:06 +00:00
7f6b2e43f4 Let me shoot myself in the foot if I want to
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 10:11:50 +00:00
6e45ada258 Let's have a default ingress class, eh? (yaml was a mistake)
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 10:09:08 +00:00
eb0b528423 Let's have a default ingress class, eh?
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 10:08:08 +00:00
6c29998db6 We might need it in future, but not for now
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 09:56:09 +00:00
a1c4693c7c Running Drupal in k8s is a fools errand
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 09:51:49 +00:00
1152916cc3 reconciling reality
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 09:40:59 +00:00
6369b00e24 how are these not updated already
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 09:31:52 +00:00
9d0b2425a9 Remove old oldseries
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 09:29:07 +00:00
fae5c02ba6 Remove old oldseries
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 09:27:37 +00:00
29e535cb29 Allow insecure argo, as its not exposed anyway and would be fronted by ingress-nginx anyway. Allows for http webhooks.
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-03-05 09:19:04 +00:00
dab777f748 Merge pull request 'Update benbusby/whoogle-search Docker digest to 5bbb30f' (#16) from renovate/benbusby-whoogle-search into main
Reviewed-on: #16
2025-03-05 08:53:39 +00:00
1508b9a2db Merge pull request 'Update Helm release csi-driver-smb to v1.17.0' (#24) from renovate/csi-driver-smb-1.x into main
Reviewed-on: #24
2025-03-04 09:48:45 +00:00
Renovate bot
c98cdcee70 Update Helm release csi-driver-smb to v1.17.0 2025-03-04 00:01:52 +00:00
82aea15631 Merge pull request 'Update esphome/esphome Docker tag to v2022.12.8' (#20) from renovate/esphome-esphome-2022.x into main
Reviewed-on: #20
2025-03-03 09:32:35 +00:00
a1f49f6370 Up-to-date forgejo 2025-03-03 09:23:24 +00:00
8caba49f9f Nearly up-to-date 2025-03-03 09:21:03 +00:00
320f75ddc1 Only two more major releases to go... 2025-03-03 09:14:23 +00:00
6eca7e00e0 One step forward, let's see how this upgrade works 2025-03-03 09:09:35 +00:00
52120be95f Upgrade to last old version 2025-03-03 09:01:31 +00:00
Renovate bot
8987b8d2a0 Update grafana/grafana Docker tag to v9.5.21 2025-03-03 00:01:47 +00:00
Renovate bot
df4641125e Update esphome/esphome Docker tag to v2022.12.8 2025-03-03 00:01:46 +00:00
Renovate bot
7504c4dbb7 Update eclipse-mosquitto Docker tag to v1.6.15 2025-03-02 00:01:44 +00:00
Renovate bot
e5ac982346 Update 1password/connect-sync Docker tag to v1.7.3 2025-03-02 00:01:43 +00:00
Renovate bot
75d0c94c40 Update 1password/connect-api Docker tag to v1.7.3 2025-03-01 00:01:30 +00:00
Renovate bot
88f6961dc1 Update benbusby/whoogle-search Docker digest to 5bbb30f 2025-03-01 00:01:28 +00:00
6a919ad4c2 Move to the official lms docker image and upgrade
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-02-28 15:00:09 +00:00
053f6b6b40 And the random manifests, and the helm templates in case any hardcoded stuff is left
Signed-off-by: Martyn Ranyard <m@rtyn.berlin>
2025-02-28 15:00:09 +00:00
ea7b456d9f Merge pull request 'Update Helm release node-feature-discovery to v0.17.2' (#12) from renovate/node-feature-discovery-0.x into main
Reviewed-on: #12
2025-02-28 14:58:53 +00:00
Renovate bot
7ffb96959a Update Helm release node-feature-discovery to v0.17.2 2025-02-28 14:58:02 +00:00
73d06a7338 Merge pull request 'Update codercom/code-server Docker tag to v4.97.2' (#11) from renovate/codercom-code-server-4.x into main
Reviewed-on: #11
2025-02-28 14:56:58 +00:00
Renovate bot
d38a784ad8 Update codercom/code-server Docker tag to v4.97.2 2025-02-28 14:56:38 +00:00
fe47d6faa2 Merge pull request 'Update Helm release cert-manager to v1.17.1' (#14) from renovate/cert-manager-1.x into main
Reviewed-on: #14
2025-02-28 14:43:48 +00:00
Renovate bot
65f6d756e4 Update Helm release cert-manager to v1.17.1 2025-02-28 14:41:35 +00:00
55 changed files with 385 additions and 549 deletions

View file

@@ -6,7 +6,7 @@ replicaCount: 1
image:
repository: codercom/code-server
tag: '4.93.1'
tag: '4.97.2'
pullPolicy: Always
secret:

View file

@@ -46,7 +46,7 @@ spec:
value: "8080"
- name: OP_LOG_LEVEL
value: info
image: 1password/connect-api:1.7.2
image: 1password/connect-api:1.7.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
@@ -92,7 +92,7 @@ spec:
value: localhost:11220
- name: OP_LOG_LEVEL
value: info
image: 1password/connect-sync:1.7.2
image: 1password/connect-sync:1.7.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3

View file

@@ -24,3 +24,11 @@ patches:
version: v1
kind: ConfigMap
name: argocd-cm
- patch: |-
- op: add
path: /spec/template/spec/containers/0/args/-
value: --insecure=true
target:
kind: Deployment
name: argocd-server
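
A minimal sketch of what the "add" op above yields once rendered: the flag is appended to whatever args the argocd-server container already carries. The pre-existing arg shown here is an assumption for illustration, not taken from this repo.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
spec:
  template:
    spec:
      containers:
        - name: argocd-server
          args:
            - /usr/local/bin/argocd-server  # assumed existing entrypoint arg
            - --insecure=true               # appended by the add op at .../args/-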

View file

@@ -33,8 +33,6 @@ spec:
mountPath: /config
- name: series
mountPath: /series
- name: oldseries
mountPath: /oldseries
- name: films
mountPath: /films
volumes:
@@ -44,9 +42,6 @@ spec:
- name: series
persistentVolumeClaim:
claimName: smb-series
- name: oldseries
persistentVolumeClaim:
claimName: smb-oldseries
- name: films
persistentVolumeClaim:
claimName: smb-films

View file

@@ -12,18 +12,6 @@ spec:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: smb-oldseries
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: smb-oldseries
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: smb-films
spec:
@@ -32,4 +20,4 @@ spec:
resources:
requests:
storage: 1Gi
storageClassName: smb-films
storageClassName: smb-films

View file

@@ -17,7 +17,7 @@ spec:
- env:
- name: ESPHOME_DASHBOARD_USE_PING
value: "true"
image: esphome/esphome:2022.12.3
image: esphome/esphome:2022.12.8
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3

View file

@@ -0,0 +1,12 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
spec:
template:
spec:
containers:
- name: external-dns
envFrom:
- secretRef:
name: dnsimple-auth

View file

@@ -3,12 +3,16 @@ kind: Kustomization
helmCharts:
- name: external-dns
repo: https://charts.bitnami.com/bitnami
version: 6.28.4
repo: https://kubernetes-sigs.github.io/external-dns
version: 1.15.2
releaseName: external-dns
namespace: external-dns
valuesInline:
provider: dnsimple
provider:
name: dnsimple
txtPrefix: armnleg
sources: [service,ingress]
extraEnvVarsSecret: dnsimple-auth
patches:
- path: deploy.yaml
target:
kind: Deployment
name: external-dns
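
The deploy.yaml patch above wires credentials in via envFrom, so this kustomization only works if a dnsimple-auth Secret exists in the external-dns namespace. A minimal sketch of its shape, assuming the key name the upstream dnsimple provider reads its token from:

apiVersion: v1
kind: Secret
metadata:
  name: dnsimple-auth
  namespace: external-dns
stringData:
  DNSIMPLE_OAUTH_TOKEN: "<api token>"  # assumed key; envFrom turns every entry into an env var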

View file

@@ -0,0 +1,23 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
external-dns.alpha.kubernetes.io/target: armnleg.martyn.berlin
nginx.ingress.kubernetes.io/proxy-body-size: 700m
name: web-s3
spec:
ingressClassName: nginx
rules:
- host: files.martyn.berlin
http:
paths:
- backend:
service:
name: s3-nginx
port:
number: 80
path: /
pathType: Prefix
tls:
- hosts:
- files.martyn.berlin

View file

@@ -0,0 +1,27 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: s3-nginx
name: s3-nginx
spec:
replicas: 1
selector:
matchLabels:
app: s3-nginx
template:
metadata:
creationTimestamp: null
labels:
app: s3-nginx
spec:
containers:
- image: nginx:1.27.4
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: s3
volumes:
- name: s3
persistentVolumeClaim:
claimName: smb-s3

View file

@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: smb-s3
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: smb-s3

View file

@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
labels:
app: s3-nginx
name: s3-nginx
spec:
internalTrafficPolicy: Cluster
ports:
- port: 80
selector:
app: s3-nginx

View file

@@ -19,7 +19,7 @@ spec:
app: forgejo
spec:
containers:
- image: codeberg.org/forgejo/forgejo:1.21
- image: codeberg.org/forgejo/forgejo:10.0.1
env:
- name: FORGEJO__database__DB_TYPE
value: postgres

View file

@@ -66,7 +66,7 @@ spec:
dnsPolicy: ClusterFirst
nodeSelector:
intel.feature.node.kubernetes.io/gpu: "true"
kubernetes.io/hostname: talos-llu-kx3
kubernetes.io/hostname: talos-e48-wv7
terminationGracePeriodSeconds: 30
volumes:
- name: jellyfin-config

View file

@@ -32,7 +32,7 @@ spec:
automountServiceAccountToken: false
containers:
- env: []
image: grafana/grafana:9.5.3
image: grafana/grafana:9.5.21
name: grafana
ports:
- containerPort: 3000

View file

@@ -18,7 +18,7 @@ spec:
app: lidarr
spec:
containers:
- image: hotio/lidarr:release
- image: hotio/lidarr:release-2.9.6.4552
name: lidarr
resources:
requests:

View file

@@ -12,18 +12,6 @@ spec:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: smb-oldmusic
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: smb-oldmusic
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: logitech-media-server-config
spec:

View file

@@ -19,7 +19,7 @@ spec:
app.kubernetes.io/name: logitech-media-server
spec:
containers:
- image: doliana/logitech-media-server:2021_11_06-8.2.0
- image: lmscommunity/lyrionmusicserver:9.1.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 10
@@ -61,27 +61,14 @@ spec:
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /srv/squeezebox
name: config
- mountPath: /smbmusic
name: smbmusic
- mountPath: /smboldmusic
name: smboldmusic
dnsPolicy: ClusterFirst
initContainers:
- command:
- sh
- -c
- mkdir /smbmusic; mkdir -pv /config/playlists /config/config; chown -Rc 1000:1000
/config
image: doliana/logitech-media-server:2023_04_15-8.3.1
imagePullPolicy: IfNotPresent
name: init-config
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /config
name: config
- mountPath: /music
name: smbmusic
- mountPath: /playlist
name: config
subPath: playlist
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
terminationGracePeriodSeconds: 30
@@ -92,9 +79,6 @@ spec:
- name: smbmusic
persistentVolumeClaim:
claimName: smb-music
- name: smboldmusic
persistentVolumeClaim:
claimName: smb-oldmusic
updateStrategy:
rollingUpdate:
partition: 0

View file

@@ -21,7 +21,7 @@ spec:
app.kubernetes.io/name: mosquitto
spec:
containers:
- image: eclipse-mosquitto:1.6.12
- image: eclipse-mosquitto:1.6.15
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3

View file

@@ -10,7 +10,7 @@ helmCharts:
includeCRDs: true
namespace: node-feature-discovery
releaseName: node-feature-discovery
version: 0.16.6
version: 0.17.2
repo: https://kubernetes-sigs.github.io/node-feature-discovery/charts
valuesInLine:
master:

View file

@@ -10,7 +10,7 @@ helmCharts:
includeCRDs: true
namespace: nvidia-device-plugin
releaseName: nvidia-device-plugin
version: 0.17.0
version: 0.17.1
repo: https://nvidia.github.io/k8s-device-plugin
valuesInline:
nodeSelector: "feature.node.kubernetes.io/pci-0300_10de_13c0_1569_13c0.present=true"

View file

@@ -0,0 +1,62 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
- name: paperless-ngx
repo: https://charts.gabe565.com
version: 0.24.1
releaseName: paperless-ngx
namespace: paperless-ngx
valuesInline:
persistence:
data:
enabled: "true"
size: "1Gi"
accessMode: ReadWriteOnce
storageClass: "longhorn-fast"
media:
enabled: "true"
size: "8Gi"
accessMode: ReadWriteOnce
storageClass: "longhorn-fast"
export:
enabled: "true"
size: "1Gi"
accessMode: ReadWriteOnce
storageClass: "longhorn-fast"
consume:
enabled: "true"
size: "1Gi"
accessMode: ReadWriteOnce
storageClass: "smb-scans"
service:
main:
type: LoadBalancer
annotations:
external-dns.alpha.kubernetes.io/hostname: "paperless.martyn.berlin"
ports:
http:
port: 8080
postgresql:
enabled: "true"
primary:
persistence:
enabled: "true"
storageClass: "longhorn-fast"
env:
TZ: "Europe/Berlin"
resources:
requests:
cpu: "25m"
memory: "511772986"
patches:
- target:
kind: Service
name: paperless-ngx
patch: |-
- op: replace
path: /spec/ports
value: [{"name":"http","port":80,"targetPort":"http","protocol":"TCP"}]

View file

@@ -16,7 +16,7 @@ spec:
app: prowlarr
spec:
containers:
- image: hotio/prowlarr:release-1.26.1.4844
- image: hotio/prowlarr:release-1.31.2.4975
imagePullPolicy: Always
name: prowlarr
ports:

View file

@@ -18,7 +18,7 @@ spec:
app: radarr
spec:
containers:
- image: hotio/radarr:release-5.18.4.9674
- image: hotio/radarr:release-5.19.3.9730
imagePullPolicy: IfNotPresent
name: radarr
ports:

View file

@@ -1,9 +0,0 @@
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
name: replicapool-ssd
namespace: rook-ceph
spec:
failureDomain: host
replicated:
size: 2

View file

@@ -1,47 +0,0 @@
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
name: ssd-cluster
namespace: rook-ceph # namespace:cluster
spec:
dataDirHostPath: /var/lib/rook-cluster-ssd
cephVersion:
image: quay.io/ceph/ceph:v18
allowUnsupported: true
mon:
count: 1
allowMultiplePerNode: true
mgr:
count: 1
allowMultiplePerNode: true
dashboard:
enabled: true
crashCollector:
disable: true
storage:
useAllNodes: false
useAllDevices: false
#deviceFilter:
nodes:
- name: "talos-7oq-vur"
devices:
- name: "sda"
config:
osdsPerDevice: "1"
- name: "talos-iqd-ysy"
devices:
- name: "sda"
config:
osdsPerDevice: "1"
monitoring:
enabled: false
healthCheck:
daemonHealth:
mon:
interval: 45s
timeout: 600s
priorityClassNames:
all: system-node-critical
mgr: system-cluster-critical
disruptionManagement:
managePodBudgets: true

View file

@@ -1,13 +0,0 @@
kind: ConfigMap
apiVersion: v1
metadata:
name: rook-config-override
namespace: rook-ceph # namespace:cluster
data:
config: |
[global]
osd_pool_default_size = 1
mon_warn_on_pool_no_redundancy = false
bdev_flock_retry = 20
bluefs_buffered_io = false
mon_data_avail_warn = 10

View file

@@ -1,130 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: rook-ceph-tools
namespace: rook-ceph # namespace:cluster
labels:
app: rook-ceph-tools
spec:
replicas: 1
selector:
matchLabels:
app: rook-ceph-tools
template:
metadata:
labels:
app: rook-ceph-tools
spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: rook-ceph-tools
image: quay.io/ceph/ceph:v17.2.6
command:
- /bin/bash
- -c
- |
# Replicate the script from toolbox.sh inline so the ceph image
# can be run directly, instead of requiring the rook toolbox
CEPH_CONFIG="/etc/ceph/ceph.conf"
MON_CONFIG="/etc/rook/mon-endpoints"
KEYRING_FILE="/etc/ceph/keyring"
# create a ceph config file in its default location so ceph/rados tools can be used
# without specifying any arguments
write_endpoints() {
endpoints=$(cat ${MON_CONFIG})
# filter out the mon names
# external cluster can have numbers or hyphens in mon names, handling them in regex
# shellcheck disable=SC2001
mon_endpoints=$(echo "${endpoints}"| sed 's/[a-z0-9_-]\+=//g')
DATE=$(date)
echo "$DATE writing mon endpoints to ${CEPH_CONFIG}: ${endpoints}"
cat <<EOF > ${CEPH_CONFIG}
[global]
mon_host = ${mon_endpoints}
[client.admin]
keyring = ${KEYRING_FILE}
EOF
}
# watch the endpoints config file and update if the mon endpoints ever change
watch_endpoints() {
# get the timestamp for the target of the soft link
real_path=$(realpath ${MON_CONFIG})
initial_time=$(stat -c %Z "${real_path}")
while true; do
real_path=$(realpath ${MON_CONFIG})
latest_time=$(stat -c %Z "${real_path}")
if [[ "${latest_time}" != "${initial_time}" ]]; then
write_endpoints
initial_time=${latest_time}
fi
sleep 10
done
}
# read the secret from an env var (for backward compatibility), or from the secret file
ceph_secret=${ROOK_CEPH_SECRET}
if [[ "$ceph_secret" == "" ]]; then
ceph_secret=$(cat /var/lib/rook-ceph-mon/secret.keyring)
fi
# create the keyring file
cat <<EOF > ${KEYRING_FILE}
[${ROOK_CEPH_USERNAME}]
key = ${ceph_secret}
EOF
# write the initial config file
write_endpoints
# continuously update the mon endpoints if they fail over
watch_endpoints
imagePullPolicy: IfNotPresent
tty: true
securityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
capabilities:
drop: ["ALL"]
env:
- name: ROOK_CEPH_USERNAME
valueFrom:
secretKeyRef:
name: rook-ceph-mon
key: ceph-username
volumeMounts:
- mountPath: /etc/ceph
name: ceph-config
- name: mon-endpoint-volume
mountPath: /etc/rook
- name: ceph-admin-secret
mountPath: /var/lib/rook-ceph-mon
readOnly: true
volumes:
- name: ceph-admin-secret
secret:
secretName: rook-ceph-mon
optional: false
items:
- key: ceph-secret
path: secret.keyring
- name: mon-endpoint-volume
configMap:
name: rook-ceph-mon-endpoints
items:
- key: data
path: mon-endpoints
- name: ceph-config
emptyDir: {}
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 5

View file

@@ -1,17 +0,0 @@
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
name: ssdfs
namespace: rook-ceph
spec:
metadataPool:
replicated:
size: 1
dataPools:
- name: replicated
replicated:
size: 1
preserveFilesystemOnDelete: true
metadataServer:
activeCount: 1
activeStandby: true

View file

@@ -1,24 +0,0 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rook-cephfs-ssd
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
# clusterID is the namespace where the rook cluster is running
clusterID: rook-ceph
fsName: ssdfs
# Ceph pool into which the image shall be created
pool: ssdfs-replicated
# The secrets contain Ceph admin credentials.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
# Delete the rbd volume when a PVC is deleted
reclaimPolicy: Delete

View file

@@ -1,32 +0,0 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rook-ceph-block-ssd
annotations:
storageclass.kubernetes.io/is-default-class: true
# Change "rook-ceph" provisioner prefix to match the operator namespace if needed
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
# clusterID is the namespace where the rook cluster is running
clusterID: rook-ceph
# Ceph pool into which the RBD image shall be created
pool: replicapool-ssd
# RBD image format. Defaults to "2".
imageFormat: "2"
# RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
imageFeatures: layering
# The secrets contain Ceph admin credentials.
csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
# Specify the filesystem type of the volume. If not specified, csi-provisioner
# will set default as `ext4`.
csi.storage.k8s.io/fstype: xfs
# Delete the rbd volume when a PVC is deleted
reclaimPolicy: Delete

View file

@@ -38,7 +38,7 @@ metadata:
annotations:
configmap.reloader.stakater.com/reload: "ser2net"
spec:
replicas: 1
replicas: 0
strategy:
type: Recreate
selector:

View file

@@ -4,7 +4,7 @@ metadata:
name: smb-films
provisioner: smb.csi.k8s.io
parameters:
source: "//172.20.0.125/films"
source: "//172.20.0.70/films"
csi.storage.k8s.io/node-stage-secret-name: smb-creds
csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Retain

View file

@@ -1,10 +1,10 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: smb-oldseries
name: smb-s3
provisioner: smb.csi.k8s.io
parameters:
source: "//hp40l/disk2/oldseries"
source: "//172.20.0.69/s3"
csi.storage.k8s.io/node-stage-secret-name: smb-creds
csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Retain

View file

@@ -4,7 +4,7 @@ metadata:
name: smb-series
provisioner: smb.csi.k8s.io
parameters:
source: "//hp40l/disk2/series"
source: "//172.20.0.70/series"
csi.storage.k8s.io/node-stage-secret-name: smb-creds
csi.storage.k8s.io/node-stage-secret-namespace: kube-system
reclaimPolicy: Retain
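
All of these StorageClasses defer authentication to the same node-stage Secret. A minimal sketch of it, assuming the username/password key names that csi-driver-smb conventionally expects:

apiVersion: v1
kind: Secret
metadata:
  name: smb-creds
  namespace: kube-system
stringData:
  username: "<smb user>"      # assumed keys, per csi-driver-smb convention
  password: "<smb password>"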

View file

@@ -0,0 +1,13 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: tailscale-script
data:
script.sh: |
tailscaled --socks5-server=localhost:1055 --outbound-http-proxy-listen=localhost:1055 --tun=userspace-networking &
tailscale up --authkey=$TS_AUTHKEY --advertise-tags=tag:k8s --hostname k8s-tailscale-proxy &
sleep 3
echo "Waiting for $COUNTRY to do something"
while ! tailscale exit-node list 2>/dev/null | grep $COUNTRY >/dev/null; do echo -n . ;sleep 5; done
tailscale set --exit-node $(tailscale exit-node list | grep $COUNTRY | cut -f2 -d' ' | shuf | head -n1)
while true; do sleep 1; done

View file

@@ -0,0 +1,100 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: tailscale-proxy
name: tailscale-proxy
spec:
replicas: 1
selector:
matchLabels:
app: tailscale-proxy
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: tailscale-proxy
spec:
containers:
- command:
- /bin/sh
- -c
- 'sh /script/script.sh'
env:
- name: TS_KUBE_SECRET
value: tailscale
- name: COUNTRY
value: Switzerland
- name: TS_AUTHKEY
valueFrom:
secretKeyRef:
key: TS_AUTHKEY
name: tailscale-auth
image: ghcr.io/tailscale/tailscale:v1.80.3
imagePullPolicy: IfNotPresent
startupProbe:
exec:
command:
- /bin/sh
- -c
- tailscale ip | grep ^100 > /dev/null
periodSeconds: 30
failureThreshold: 30
livenessProbe:
exec:
command:
- /bin/sh
- -c
- tailscale ip | grep ^100 > /dev/null
periodSeconds: 30
failureThreshold: 2
readinessProbe:
exec:
command:
- /bin/sh
- -c
- http_proxy=127.0.0.1:1055 wget -O- ifconfig.co/country 2>&1 | grep $COUNTRY > /dev/null
initialDelaySeconds: 60
periodSeconds: 60
failureThreshold: 3
name: tailscale
securityContext:
privileged: true
runAsGroup: 0
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-t4rzn
readOnly: true
- mountPath: /script
name: script
serviceAccount: tailscale
serviceAccountName: tailscale
volumes:
- name: script
configMap:
name: tailscale-script
- name: kube-api-access-t4rzn
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
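
The Deployment above reads TS_AUTHKEY from a Secret named tailscale-auth that is not part of this change. A minimal sketch of it; the value is hypothetical and would be an auth key generated in the tailnet admin console:

apiVersion: v1
kind: Secret
metadata:
  name: tailscale-auth
  namespace: tailscale-proxy  # assumed: the Application below targets this namespace
stringData:
  TS_AUTHKEY: "tskey-auth-..."  # hypothetical placeholder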

View file

@@ -0,0 +1,4 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: tailscale

View file

@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
labels:
app: tailscale-proxy
app.kubernetes.io/instance: tailscale-proxy
name: tailscale-proxy
spec:
ports:
- port: 1055
protocol: TCP
targetPort: 1055
selector:
app: tailscale-proxy
type: LoadBalancer
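
For a consumer, the usual pattern would be pointing standard proxy env vars at this Service, mirroring what the readiness probe does in-pod. A sketch, under the assumption that the Service actually reaches the listener (tailscaled is started with both --socks5-server and --outbound-http-proxy-listen on 1055):

# Fragment of a client pod spec (names taken from the manifests above)
env:
  - name: http_proxy
    value: "http://tailscale-proxy.tailscale-proxy.svc:1055"
  - name: https_proxy
    value: "http://tailscale-proxy.tailscale-proxy.svc:1055"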

View file

@@ -14,6 +14,7 @@ spec:
labels:
app: qbittorrent
spec:
initContainers:
containers:
- image: qbittorrentofficial/qbittorrent-nox:latest
name: qbittorrent
@@ -34,40 +35,6 @@ spec:
value: "/config"
- name: QBT_DOWNLOADS
value: "/downloads"
- env:
- name: TS_KUBE_SECRET
value: tailscale
- name: TS_USERSPACE
value: "false"
- name: TS_OUTBOUND_HTTP_PROXY_LISTEN
value: "localhost:1055"
- name: TS_SOCKS5_SERVER
value: "localhost:1055"
- name: TS_EXTRA_ARGS
value: "--exit-node=100.90.55.121"
- name: TS_AUTHKEY
valueFrom:
secretKeyRef:
key: TS_AUTHKEY
name: tailscale-auth
optional: true
livenessProbe:
exec:
command:
- ping
- -c1
- 100.100.100.100
initialDelaySeconds: 120
periodSeconds: 5
image: ghcr.io/tailscale/tailscale:latest
name: ts-sidecar
securityContext:
runAsGroup: 1000
runAsUser: 1000
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-t4rzn
readOnly: true
preemptionPolicy: PreemptLowerPriority
priority: 0
serviceAccountName: tailscale

View file

@@ -24,7 +24,7 @@ spec:
value: en
- name: WHOOGLE_CONFIG_SEARCH_LANGUAGE
value: en
image: benbusby/whoogle-search@sha256:ecccdb598f890140bf5564ea0307d3a72871ab3d14fbf22e308b904846e5c590
image: benbusby/whoogle-search@sha256:5bbb30fc4cf67563b48529c5291813b3d49c290e1e8b9e3aaa5081e9cb6e40c0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3

View file

@@ -8,4 +8,4 @@ spec:
storageClassName: longhorn-fast
resources:
requests:
storage: 128Mi
storage: 1280Mi

View file

@@ -11,7 +11,7 @@ spec:
source:
chart: cilium
repoURL: https://helm.cilium.io/
targetRevision: 1.17.1
targetRevision: 1.17.2
helm:
parameters:
- name: ipam.mode

View file

@@ -14,6 +14,10 @@ spec:
targetRevision: 4.12.0
helm:
parameters:
- name: controller.ingressClassResource.default
value: "true"
- name: controller.config.annotations-risk-level
value: "Critical"
- name: controller.service.type
value: LoadBalancer
- name: controller.allowSnippetAnnotations
@@ -33,6 +37,11 @@ spec:
more_set_headers -a "X-Robots-Tag: anthropic-ai: none";
more_set_headers -a "X-Robots-Tag: CCBot: none";
more_set_headers -a "X-Robots-Tag: semrushbot: none";
more_set_headers -a "X-Robots-Tag: Amazonbot: none";
more_set_headers -a "X-Robots-Tag: dotbot: none";
more_set_headers -a "X-Robots-Tag: AhrefsBot: none";
- name: controller.config.block-user-agents
value: "~*Amazonbot,~*SemrushBot,~*DotBot,~*Ahrefsbot,~*GPT"
syncPolicy:
automated:
selfHeal: true
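
For context: chart parameters under controller.config land in the controller's ConfigMap, and the ~* prefix marks each block-user-agents entry as a case-insensitive regex. A sketch of the rendered result, with the ConfigMap name assumed from the chart's defaults:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # assumed default fullname
data:
  annotations-risk-level: "Critical"
  block-user-agents: "~*Amazonbot,~*SemrushBot,~*DotBot,~*Ahrefsbot,~*GPT"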

View file

@@ -11,7 +11,7 @@ spec:
source:
chart: cert-manager
repoURL: https://charts.jetstack.io
targetRevision: v1.13.1
targetRevision: v1.17.1
helm:
parameters:
- name: installCRDs

View file

@@ -11,7 +11,7 @@ spec:
source:
chart: secrets-store-csi-driver
repoURL: https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
targetRevision: 1.3.4
targetRevision: 1.4.8
syncPolicy:
automated:
selfHeal: true

View file

@@ -1,44 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: bikerwitch
namespace: argocd
spec:
destination:
namespace: bikerwitch
server: https://kubernetes.default.svc
project: default
source:
helm:
parameters:
- name: service.type
value: LoadBalancer
- name: persistence.enabled
value: "true"
- name: persistence.storageClass
value: "longhorn-fast"
- name: image.repository
value: drupal
- name: image.tag
value: 9.4-php8.0-apache
values: |-
ingress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
external-dns.alpha.kubernetes.io/target: armnleg.martyn.berlin
hosts:
- host: bikerwitch.martyn.berlin
paths:
- /
- host: www.bikerwitch.org.uk
paths:
- /
tls:
- hosts:
- bikerwitch.martyn.berlin
- www.bikerwitch.org.uk
path: apps-helm/drupal
repoURL: https://git.martyn.berlin/martyn/infra4talos.git
targetRevision: HEAD

View file

@@ -11,7 +11,7 @@ spec:
source:
chart: csi-driver-smb
repoURL: https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
targetRevision: v1.13.0
targetRevision: v1.17.0
syncPolicy:
automated:
selfHeal: true

View file

@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: files-web
namespace: argocd
spec:
destination:
namespace: files-web
server: https://kubernetes.default.svc
project: apps
source:
path: apps-kustomized/files-web
repoURL: https://git.martyn.berlin/martyn/infra4talos
targetRevision: HEAD
syncPolicy:
automated:
selfHeal: true

View file

@@ -1,65 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: garage
namespace: argocd
spec:
destination:
namespace: garage
server: https://kubernetes.default.svc
project: infra
source:
helm:
valuesObject:
service:
type: LoadBalancer
persistence:
enabled: "true"
meta:
storageClass: longhorn-fast
data:
storageClass: longhorn-spinny
size: "1Gi"
deployment:
replicaCount: "1"
garage:
replicationMode: "1"
s3:
api:
rootDomain: ".s3.files.martyn.berlin"
web:
rootDomain: ".martyn.berlin"
ingress:
s3:
api:
className: "nginx"
enabled: "true"
annotations:
external-dns.alpha.kubernetes.io/target: armnleg.martyn.berlin
cert-manager.io/cluster-issuer: letsencrypt
nginx.ingress.kubernetes.io/proxy-body-size: "700m"
tls:
- hosts:
- "s3.files.martyn.berlin"
hosts:
- host: s3.files.martyn.berlin
paths:
- path: /
pathType: Prefix
web:
className: "nginx"
enabled: "true"
annotations:
external-dns.alpha.kubernetes.io/target: armnleg.martyn.berlin
cert-manager.io/cluster-issuer: letsencrypt
tls:
- hosts:
- "files.martyn.berlin"
hosts:
- host: files.martyn.berlin
paths:
- path: /
pathType: Prefix
path: script/helm/garage
repoURL: https://git.deuxfleurs.fr/Deuxfleurs/garage
targetRevision: HEAD

View file

@@ -31,7 +31,7 @@ spec:
annotations:
external-dns.alpha.kubernetes.io/hostname: nodered.martyn.berlin
repoURL: https://k8s-at-home.com/charts/
targetRevision: 5.3.1
targetRevision: 5.4.0
syncPolicy:
automated:
selfHeal: true

View file

@@ -9,57 +9,9 @@ spec:
server: https://kubernetes.default.svc
project: apps
source:
chart: paperless-ngx
helm:
parameters:
- name: service.main.type
value: "LoadBalancer"
- name: persistence.data.enabled
value: "true"
- name: persistence.data.size
value: "1Gi"
- name: persistence.data.accessMode
value: ReadWriteOnce
- name: persistence.data.storageClass
value: "longhorn-fast"
- name: persistence.media.enabled
value: "true"
- name: persistence.media.size
value: "8Gi"
- name: persistence.media.accessMode
value: ReadWriteOnce
- name: persistence.media.storageClass
value: "longhorn-fast"
- name: persistence.export.enabled
value: "true"
- name: persistence.export.size
value: "1Gi"
- name: persistence.export.accessMode
value: ReadWriteOnce
- name: persistence.export.storageClass
value: "longhorn-fast"
- name: persistence.consume.enabled
value: "true"
- name: persistence.consume.size
value: "1Gi"
- name: persistence.consume.accessMode
value: ReadWriteOnce
- name: persistence.consume.storageClass
value: "smb-scans"
- name: postgresql.enabled
value: "true"
- name: postgresql.primary.persistence.enabled
value: "true"
- name: postgresql.primary.persistence.storageClass
value: "longhorn-fast"
- name: env.TZ
value: "Europe/Berlin"
- name: resources.requests.cpu
value: "25m"
- name: resources.requests.memory
value: "511772986"
repoURL: https://charts.gabe565.com
targetRevision: 0.7.8
path: apps-kustomized/paperless-ngx
repoURL: https://git.martyn.berlin/martyn/infra4talos
targetRevision: HEAD
syncPolicy:
automated:
selfHeal: true

View file

@@ -44,6 +44,9 @@ spec:
- name: scans
size: 1Gi
storageClass: longhorn-fast
- name: s3
size: 20Gi
storageClass: longhorn-fast
path: apps-helm/samba4
repoURL: https://git.martyn.berlin/martyn/infra4talos.git
targetRevision: HEAD

View file

@@ -0,0 +1,17 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: tailscale-proxy
namespace: argocd
spec:
destination:
namespace: tailscale-proxy
server: https://kubernetes.default.svc
project: apps
source:
path: apps-kustomized/tailscale-proxy
repoURL: https://git.martyn.berlin/martyn/infra4talos
targetRevision: HEAD
syncPolicy:
automated:
selfHeal: true

View file

@@ -2,5 +2,11 @@
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"argocd": {
"fileMatch": ["everything-app/.+\\.yaml$"]
},
"kubernetes": {
"fileMatch": [
"apps-kustomized/.+\\.yaml$",
"apps-helm/.+/tempates/.+\\.yaml"
]
}
}