Camel K is an integration framework designed for serverless and microservice architectures, and it can be deployed directly on a Kubernetes, Knative, or OpenShift cluster.
There is still relatively little material about Camel K online, so I worked through the official documentation step by step and recorded the process here.
However, this attempt, based on camel-k:1.0.0-RC2, did not succeed.
Official documentation: Camel K - Apache Camel
GitHub: camel-k
The deployment in this article did not succeed; for a walkthrough that did, see:
部署 Apache Camel K | 从 master 分支源码构建并部署 Camel K 平台
When I started writing this note, the camel-k version was 1.0.0-RC2, and the setup ran into quite a few pitfalls:
- the registry authentication logic in 1.0.0-RC2 is broken
- buildah fails during the build for reasons I could not pin down

The environment is a 2-node Kubernetes cluster:
NAME          STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
hyper-sia     Ready    master   10d   v1.17.3   192.168.3.200   <none>        Ubuntu 18.04.4 LTS   4.15.0-88-generic   docker://19.3.6
hyper-tesla   Ready    <none>   9d    v1.17.3   192.168.3.201   <none>        Ubuntu 18.04.4 LTS   4.15.0-88-generic   docker://19.3.7
Camel K must be deployed on a Kubernetes or OpenShift cluster; there is plenty of material on cluster setup, so it is not repeated here.
Some cluster types require additional, type-specific configuration; refer to the official installation documentation for the corresponding instructions.
kamel is the command-line tool used to configure the cluster and run integrations; a prebuilt kamel binary can be downloaded from the Releases page.
Put the kamel binary somewhere on the PATH, for example /usr/local/bin/.
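For example, after downloading and extracting the client archive for your platform (a minimal sketch; the exact archive name on the Releases page varies by version and OS):
chmod +x kamel
sudo mv kamel /usr/local/bin/
kamel version   # verify the client is on the PATH and working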
If no namespace is specified, kamel uses the default namespace; to keep things tidy, it is easier to create a dedicated one.
kubectl create ns camel-k
Alternatively, the namespace can be created from a configuration file camel-k-namespace.yml:
apiVersion: v1
kind: Namespace
metadata:
  name: camel-k
  labels:
    name: camel-k
Apply the configuration:
kubectl apply -f camel-k-namespace.yml
After applying, the namespace is created.
Run the command:
kamel install
or specify the namespace explicitly:
kamel install -n camel-k
Since no registry has been specified, you will see this output:
Error: cannot find automatically a registry where to push images
Pulling the operator image takes a while depending on network conditions, so the pod's STATUS will show ContainerCreating for some time.
You can see that the corresponding pod has been created:
NAME READY STATUS RESTARTS AGE
camel-k-operator-7dd8bd88b5-cftst 1/1 Running 0 16s
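The listing above can be obtained with a command along these lines (assuming the camel-k namespace created earlier):
kubectl get pods -n camel-k -w   # -w keeps watching until the STATUS turns Running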
Since Camel K needs to push images to a registry, running any integration without a registry configured will hang at "Waiting for Platform".
To uninstall, see the official documentation: Uninstalling Camel K
OpenShift:
oc delete all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd -l 'app=camel-k' -n camel-k
Kubernetes:
kubectl delete all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd -l 'app=camel-k' -n camel-k
kamel install needs a registry to be specified; the relevant options, using docker.io as an example:
kamel install --registry docker.io --organization your-user-id-or-org --registry-auth-username your-user-id --registry-auth-password your-password
Which registry to use depends on your network environment; if no registry with good connectivity is available, a temporary one can be stood up.
For this experiment I ran a temporary registry:
docker run -d -p 5000:5000 --restart always --name registry registry:2
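To verify the temporary registry responds, the standard Docker Registry v2 API can be queried (the hostname below is the one used later in this post; adjust it to your own host):
curl http://hyper-sia.lo:5000/v2/_catalog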
The temporary registry has no TLS, so it must be listed under insecure-registries in /etc/docker/daemon.json. Example configuration:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 20,
  "insecure-registries": [
    "hyper-sia.lo:5000"
  ]
}
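After editing /etc/docker/daemon.json, the Docker daemon has to be restarted for the change to take effect (a sketch assuming a systemd-managed host):
sudo systemctl restart docker
docker info | grep -A 2 "Insecure Registries"   # confirm the entry was picked up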
Alternatively, the registry can run inside the cluster, based on the minikube registry addon template registry-rc.yaml.tmpl:
https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/registry/registry-rc.yaml.tmpl
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    kubernetes.io/minikube-addons: registry
    addonmanager.kubernetes.io/mode: Reconcile
  name: registry
  namespace: kube-system
spec:
  replicas: 1
  selector:
    kubernetes.io/minikube-addons: registry
  template:
    metadata:
      labels:
        actual-registry: "true"
        kubernetes.io/minikube-addons: registry
        addonmanager.kubernetes.io/mode: Reconcile
    spec:
      containers:
      - image: registry.hub.docker.com/library/registry:2.7.1
        imagePullPolicy: IfNotPresent
        name: registry
        ports:
        - containerPort: 5000
          protocol: TCP
        env:
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: "true"
Create it directly from the URL:
kubectl create -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/registry/registry-rc.yaml.tmpl -n kube-system
or from the local file:
kubectl create -f registry-rc.yaml.tmpl -n kube-system
Then create a Service from the file registry-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  name: registry
  namespace: kube-system
  labels:
    kubernetes.io/minikube-addons: registry
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    kubernetes.io/minikube-addons: registry
  ports:
  - name: registry
    port: 5000
    protocol: TCP
Apply it:
kubectl create -f registry-svc.yaml -n kube-system
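The registry Service can then be checked (and its ClusterIP noted for later use):
kubectl get svc registry -n kube-system -o wide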
My network access to Maven Central is relatively slow, so I use a settings.xml to point Maven at mirror repositories.
settings.xml:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <mirrors>
        <mirror>
            <id>alimaven</id>
            <name>aliyun maven</name>
            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
            <mirrorOf>central</mirrorOf>
        </mirror>
    </mirrors>
    <repositories>
        <repository>
            <id>spring</id>
            <url>https://maven.aliyun.com/repository/spring</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
        <repository>
            <id>apache-snapshots</id>
            <url>https://maven.aliyun.com/repository/apache-snapshots</url>
            <releases>
                <enabled>true</enabled>
            </releases>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
    </repositories>
</settings>
Create a configmap from settings.xml:
kubectl create configmap maven-settings --from-file=settings.xml -n camel-k
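A quick sanity check that the file ended up in the configmap:
kubectl describe configmap maven-settings -n camel-k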
Note: the --maven-settings and --maven-repository options are mutually exclusive; only one of the two can be used.
Run the install command with the following options:
- --registry hyper-sia.lo:5000
- --maven-repository https://maven.aliyun.com/nexus/content/groups/public/ (even with this configured, most dependencies still seem to be downloaded from the Apache repositories; only a few go through the configured mirror)
- --maven-settings configmap:maven-settings/settings.xml
- --build-timeout 1h (be sure to set this: the default 5m only works with excellent network and hardware conditions, otherwise the Maven build is signal-killed once it runs past roughly 3m45s — this cost me a whole day)
- --base-image openjdk:8 [1]

kamel install --base-image openjdk:8 --registry hyper-sia.lo:5000 --registry-insecure --build-timeout 1h --maven-settings=configmap:maven-settings/settings.xml --save -n camel-k

or, reusing the options stored by --save:

kamel install --maven-settings=configmap:maven-settings/settings.xml --build-timeout 1h -n camel-k
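After the install, the state of the IntegrationPlatform can be inspected; once a usable registry is configured its phase should eventually reach Ready rather than staying stuck (a sketch using the CRDs installed by kamel install):
kubectl get integrationplatform -n camel-k
kubectl describe integrationplatform -n camel-k   # shows the registry, build timeout and base image that were recorded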
With the registry configured, we can try running a sample.
Prepare a sample file Sample.java:
import org.apache.camel.builder.RouteBuilder;

public class Sample extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:tick")
            .log("Hello Camel K!");
    }
}
--dev is equivalent to -w --logs --sync, so the integration's logs are streamed to the console:
kamel run Sample.java --dev -n camel-k
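While it runs in dev mode, a second terminal can be used to watch the integration and the kit being built for it (a sketch):
kamel get -n camel-k                     # lists integrations and their phase
kubectl get integrationkits -n camel-k   # the kit that is being built for the integration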
At this point the camel-k-kit pod is in the Init state, and its logs can be inspected with docker logs:
NAME READY STATUS RESTARTS AGE
camel-k-kit-bpot9cr1tq80fuhuflvg-builder 0/1 Init:0/1 0 8m34s
camel-k-operator-7dd8bd88b5-vkkgc 1/1 Running 0 9m2s
Find the corresponding container on the node and then call docker logs:
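One way to locate it (the kubelet-created container name includes the builder pod name):
docker ps | grep camel-k-kit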
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
41b1f4c46d61 cd24319e0705 "kamel builder --nam…" 9 minutes ago Up 9 minutes k8s_builder_camel-k-kit-bpot9cr1tq80fuhuflvg-builder_camel-k_83d80565-5c06-43d7-b275-c446d45bae84_0
docker logs -f 41b1f4c46d61
The log output is essentially the Maven build; an excerpt:
{"level":"info","ts":1584525135.2093751,"logger":"camel-k.builder","msg":"Go Version: go1.13.3"}
{"level":"info","ts":1584525135.20941,"logger":"camel-k.builder","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1584525135.209414,"logger":"camel-k.builder","msg":"Camel K Version: 1.0.0-RC2"}
{"level":"info","ts":1584525135.5656009,"logger":"camel-k.builder","msg":"steps: [github.com/apache/camel-k/pkg/builder/runtime/LoadCamelCatalog@0 github.com/apache/camel-k/pkg/builder/CleanBuildDir@9 github.com/apache/camel-k/pkg/builder/runtime/GenerateProject@10 github.com/apache/camel-k/pkg/builder/GenerateProjectSettings@11 github.com/apache/camel-k/pkg/builder/InjectDependencies@12 github.com/apache/camel-k/pkg/builder/SanitizeDependencies@13 github.com/apache/camel-k/pkg/builder/runtime/ComputeDependencies@20 github.com/apache/camel-k/pkg/builder/IncrementalImageContext@30]"}
{"level":"info","ts":1584525135.5656424,"logger":"camel-k.builder","msg":"executing step","step":"github.com/apache/camel-k/pkg/builder/runtime/LoadCamelCatalog","phase":0,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.572114,"logger":"camel-k.builder","msg":"step done in 0.006464 seconds","step":"github.com/apache/camel-k/pkg/builder/runtime/LoadCamelCatalog","phase":0,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.572155,"logger":"camel-k.builder","msg":"executing step","step":"github.com/apache/camel-k/pkg/builder/CleanBuildDir","phase":9,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.572175,"logger":"camel-k.builder","msg":"step done in 0.000015 seconds","step":"github.com/apache/camel-k/pkg/builder/CleanBuildDir","phase":9,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.5721788,"logger":"camel-k.builder","msg":"executing step","step":"github.com/apache/camel-k/pkg/builder/runtime/GenerateProject","phase":10,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.5721836,"logger":"camel-k.builder","msg":"step done in 0.000002 seconds","step":"github.com/apache/camel-k/pkg/builder/runtime/GenerateProject","phase":10,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.5721865,"logger":"camel-k.builder","msg":"executing step","step":"github.com/apache/camel-k/pkg/builder/GenerateProjectSettings","phase":11,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.5737765,"logger":"camel-k.builder","msg":"step done in 0.001586 seconds","step":"github.com/apache/camel-k/pkg/builder/GenerateProjectSettings","phase":11,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.5738053,"logger":"camel-k.builder","msg":"executing step","step":"github.com/apache/camel-k/pkg/builder/InjectDependencies","phase":12,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.5738494,"logger":"camel-k.builder","msg":"step done in 0.000038 seconds","step":"github.com/apache/camel-k/pkg/builder/InjectDependencies","phase":12,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.5738537,"logger":"camel-k.builder","msg":"executing step","step":"github.com/apache/camel-k/pkg/builder/SanitizeDependencies","phase":13,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.5738566,"logger":"camel-k.builder","msg":"step done in 0.000000 seconds","step":"github.com/apache/camel-k/pkg/builder/SanitizeDependencies","phase":13,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.57386,"logger":"camel-k.builder","msg":"executing step","step":"github.com/apache/camel-k/pkg/builder/runtime/ComputeDependencies","phase":20,"name":"kit-bpouuj83g14ct1jabkq0","task":"builder"}
{"level":"info","ts":1584525135.5740788,"logger":"camel-k.maven","msg":"executing: mvn --batch-mode -Dmaven.repo.local=/tmp/artifacts/m2 --settings /builder/kit-bpouuj83g14ct1jabkq0/maven/settings.xml org.apache.camel.k:camel-k-maven-plugin:1.1.0:generate-dependency-list","timeout":"45m0s"}
[INFO] Scanning for projects...
[INFO] Downloading from alimaven: http://maven.aliyun.com/nexus/content/groups/public/org/apache/camel/camel-bom/3.0.1/camel-bom-3.0.1.pom
[INFO] Downloaded from alimaven: http://maven.aliyun.com/nexus/content/groups/public/org/apache/camel/camel-bom/3.0.1/camel-bom-3.0.1.pom (125 kB at 88 kB/s)
[INFO] Downloading from alimaven: http://maven.aliyun.com/nexus/content/groups/public/org/apache/camel/camel/3.0.1/camel-3.0.1.pom
[INFO] Downloaded from alimaven: http://maven.aliyun.com/nexus/content/groups/public/org/apache/camel/camel/3.0.1/camel-3.0.1.pom (10 kB at 17 kB/s)
[INFO] Downloading from alimaven: http://maven.aliyun.com/nexus/content/groups/public/org/apache/apache/21/apache-21.pom
[INFO] Downloaded from alimaven: http://maven.aliyun.com/nexus/content/groups/public/org/apache/apache/21/apache-21.pom (17 kB at 30 kB/s)
[INFO] Downloading from alimaven: http://maven.aliyun.com/nexus/content/groups/public/org/apache/camel/k/camel-k-runtime-bom/1.1.0/camel-k-runtime-bom-1.1.0.pom
Once the build completes, the camel-k-kit-bpot9cr1tq80fuhuflvg-builder pod transitions from Init:0/1 to Running.
I will not sugar-coat it: I had already spent several days at this point, so I gave up on the release version and switched to building from source instead:
部署 Apache Camel K | 从 master 分支源码构建并部署 Camel K 平台
The rest of this post records the individual pitfalls I ran into along the way.
Without specifying --base-image, i.e. using the default base image:
STEP 1: FROM adoptopenjdk/openjdk8:slim
Getting image source signatures
Copying blob sha256:b6b53be908de2c0c78070fff0a9f04835211b3156c4e73785747af365e71a0d7
Copying blob sha256:de83a2304fa1f7c4a13708a0d15b9704f5945c2be5cbb2b3ed9b2ccb718d0b3d
Copying blob sha256:f9a83bce3af0648efaa60b9bb28225b09136d2d35d0bed25ac764297076dec1b
Copying blob sha256:4d679ae892a6eb412427ad0c21b73f14de1311f5ad671b9cf7c66ee24b4b67e8
Copying blob sha256:423ae2b273f4c17ceee9e8482fa8d071d90c7d052ae208e1fe4963fceb3d6954
Copying blob sha256:881b78aa6eae20710f3491bde3110259ae7a3ced4676a061357b37fd4860cc67
Copying blob sha256:741d43bb9b48d55ea245e4b34190a28e301b4e6c26647ed34fdd728f804697c5
error creating build container: The following failures happened while trying to pull image specified by "adoptopenjdk/openjdk8:slim" based on search registries in /etc/containers/registries.conf:
* "localhost/adoptopenjdk/openjdk8:slim": Error initializing source docker://localhost/adoptopenjdk/openjdk8:slim: error pinging docker registry localhost: Get http://localhost/v2/: dial tcp 127.0.0.1:80: connect: connection refused
* "docker.io/adoptopenjdk/openjdk8:slim": Error writing blob: error storing blob to file "/var/tmp/storage273321107/5": read tcp 10.240.0.67:44828->104.18.124.25:443: read: connection reset by peer
* "registry.fedoraproject.org/adoptopenjdk/openjdk8:slim": Error initializing source docker://registry.fedoraproject.org/adoptopenjdk/openjdk8:slim: Error reading manifest slim in registry.fedoraproject.org/adoptopenjdk/openjdk8: manifest unknown: manifest unknown
* "registry.access.redhat.com/adoptopenjdk/openjdk8:slim": Error initializing source docker://registry.access.redhat.com/adoptopenjdk/openjdk8:slim: Error reading manifest slim in registry.access.redhat.com/adoptopenjdk/openjdk8: name unknown: Repo not found
* "registry.centos.org/adoptopenjdk/openjdk8:slim": Error initializing source docker://registry.centos.org/adoptopenjdk/openjdk8:slim: Error reading manifest slim in registry.centos.org/adoptopenjdk/openjdk8: manifest unknown: manifest unknown
* "quay.io/adoptopenjdk/openjdk8:slim": Error initializing source docker://quay.io/adoptopenjdk/openjdk8:slim: Error reading manifest slim in quay.io/adoptopenjdk/openjdk8: unauthorized: access to the requested resource is not authorized
level=error msg="exit status 1"
With --base-image openjdk:8:
Case 1:
STEP 1: FROM openjdk:8
Getting image source signatures
Copying blob sha256:55769680e8277a4ff083d05f0993d1483b3d26b93a8814cf3c6f04fe5975ffa0
Copying blob sha256:5943eea6cb7c64e2000d0817410b37368b8307b639909cd590069738adee74d5
Copying blob sha256:dd8c6d374ea51e3dd671f71b28d025a7794ebea181b00838987d0b4d8a51372f
Copying blob sha256:c85513200d847a64a6e8f2cb714e2169f559b24b7736c586ff7b9aaedf71f410
Copying blob sha256:e27ce2095ec233759347c30234b114a10cfdd9871c8338738025aba71fe11701
Copying blob sha256:50e431f790939a2f924af65084cc9d39c3d3fb9ad2d57d183b7eadf86ea46992
Copying blob sha256:3ed8ceae72a639e8b56c5a0022433947ff1c253ced28a3640fb81c641c3344f3
error creating build container: The following failures happened while trying to pull image specified by "openjdk:8" based on search registries in /etc/containers/registries.conf:
* "localhost/openjdk:8": Error initializing source docker://localhost/openjdk:8: error pinging docker registry localhost: Get http://localhost/v2/: dial tcp 127.0.0.1:80: connect: connection refused
* "docker.io/library/openjdk:8": Error writing blob: error storing blob to file "/var/tmp/storage721232074/7": read tcp 10.240.0.101:55232->104.18.121.25:443: read: connection reset by peer
* "registry.fedoraproject.org/openjdk:8": Error initializing source docker://registry.fedoraproject.org/openjdk:8: Error reading manifest 8 in registry.fedoraproject.org/openjdk: manifest unknown: manifest unknown
* "registry.access.redhat.com/openjdk:8": Error initializing source docker://registry.access.redhat.com/openjdk:8: Error reading manifest 8 in registry.access.redhat.com/openjdk: name unknown: Repo not found
* "registry.centos.org/openjdk:8": Error initializing source docker://registry.centos.org/openjdk:8: Error reading manifest 8 in registry.centos.org/openjdk: manifest unknown: manifest unknown
* "quay.io/openjdk:8": Error initializing source docker://quay.io/openjdk:8: Error reading manifest 8 in quay.io/openjdk: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "\n404 Not Found \nNot Found
\nThe requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
\n"
level=error msg="exit status 1"
Case 2:
STEP 1: FROM openjdk:8
Getting image source signatures
Copying blob sha256:dd8c6d374ea51e3dd671f71b28d025a7794ebea181b00838987d0b4d8a51372f
Copying blob sha256:50e431f790939a2f924af65084cc9d39c3d3fb9ad2d57d183b7eadf86ea46992
Copying blob sha256:e27ce2095ec233759347c30234b114a10cfdd9871c8338738025aba71fe11701
Copying blob sha256:c85513200d847a64a6e8f2cb714e2169f559b24b7736c586ff7b9aaedf71f410
Copying blob sha256:5943eea6cb7c64e2000d0817410b37368b8307b639909cd590069738adee74d5
Copying blob sha256:3ed8ceae72a639e8b56c5a0022433947ff1c253ced28a3640fb81c641c3344f3
Copying blob sha256:55769680e8277a4ff083d05f0993d1483b3d26b93a8814cf3c6f04fe5975ffa0
Copying config sha256:cdf26cc71b50331364eb8081229b11ab90806546f3c7e618b7a4defb4d11726d
Writing manifest to image destination
Storing signatures
level=error msg="Error while applying layer: ApplyLayer exit status 1 stdout: stderr: permission denied"
error creating build container: The following failures happened while trying to pull image specified by "openjdk:8" based on search registries in /etc/containers/registries.conf:
* "localhost/openjdk:8": Error initializing source docker://localhost/openjdk:8: error pinging docker registry localhost: Get http://localhost/v2/: dial tcp 127.0.0.1:80: connect: connection refused
* "docker.io/library/openjdk:8": Error committing the finished image: error adding layer with blob "sha256:50e431f790939a2f924af65084cc9d39c3d3fb9ad2d57d183b7eadf86ea46992": ApplyLayer exit status 1 stdout: stderr: permission denied
* "registry.fedoraproject.org/openjdk:8": Error initializing source docker://registry.fedoraproject.org/openjdk:8: Error reading manifest 8 in registry.fedoraproject.org/openjdk: manifest unknown: manifest unknown
* "registry.access.redhat.com/openjdk:8": Error initializing source docker://registry.access.redhat.com/openjdk:8: Error reading manifest 8 in registry.access.redhat.com/openjdk: name unknown: Repo not found
* "registry.centos.org/openjdk:8": Error initializing source docker://registry.centos.org/openjdk:8: Error reading manifest 8 in registry.centos.org/openjdk: manifest unknown: manifest unknown
* "quay.io/openjdk:8": Error initializing source docker://quay.io/openjdk:8: Error reading manifest 8 in quay.io/openjdk: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "\n404 Not Found \nNot Found
\nThe requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
\n"
level=error msg="exit status 1"
With --base-image docker.io/openjdk:8:
STEP 1: FROM docker.io/openjdk:8
Getting image source signatures
Copying blob sha256:50e431f790939a2f924af65084cc9d39c3d3fb9ad2d57d183b7eadf86ea46992
Copying blob sha256:c85513200d847a64a6e8f2cb714e2169f559b24b7736c586ff7b9aaedf71f410
Copying blob sha256:dd8c6d374ea51e3dd671f71b28d025a7794ebea181b00838987d0b4d8a51372f
Copying blob sha256:5943eea6cb7c64e2000d0817410b37368b8307b639909cd590069738adee74d5
Copying blob sha256:e27ce2095ec233759347c30234b114a10cfdd9871c8338738025aba71fe11701
Copying blob sha256:3ed8ceae72a639e8b56c5a0022433947ff1c253ced28a3640fb81c641c3344f3
error creating build container: Error reading blob sha256:55769680e8277a4ff083d05f0993d1483b3d26b93a8814cf3c6f04fe5975ffa0: Get https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/55/55769680e8277a4ff083d05f0993d1483b3d26b93a8814cf3c6f04fe5975ffa0/data?verify=1584588804-hLGXHAztfbzfsYBU4ctWAoSPBu8%3D: net/http: TLS handshake timeout
level=error msg="exit status 1"
After the builder finishes its Init phase, it fails with the following message:
Error during unshare(CLONE_NEWUSER): Invalid argument
User namespaces are not enabled in /proc/sys/user/max_user_namespaces.
level=error msg="error parsing PID \"\": strconv.Atoi: parsing \"\": invalid syntax"
level=error msg="(unable to determine exit status)"
Solution:
echo 2147483647 > /proc/sys/user/max_user_namespaces
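To make the setting survive a reboot, it can also be persisted as a sysctl entry (a sketch, assuming /etc/sysctl.d is used):
echo "user.max_user_namespaces=2147483647" | sudo tee /etc/sysctl.d/99-userns.conf
sudo sysctl --system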
Modify the /etc/containers/registries.conf file inside the buildah image and commit the result as a new image.
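Roughly, that edit-and-commit flow could look like the sketch below (the buildah image tag is a placeholder — check which image the builder pod actually runs — and the registries.conf content assumes the v1 TOML format, adding the in-cluster registry to the search and insecure lists):
# start a throwaway container from the buildah image used by the builder pod
docker run -d --name buildah-edit --entrypoint sleep <buildah-image> 3600
# overwrite /etc/containers/registries.conf inside it
docker exec buildah-edit sh -c 'printf "[registries.search]\nregistries = [\"docker.io\", \"hyper-sia.lo:5000\"]\n\n[registries.insecure]\nregistries = [\"hyper-sia.lo:5000\"]\n" > /etc/containers/registries.conf'
# commit the modified container under the original tag so the builder picks it up
docker commit buildah-edit <buildah-image>
docker rm -f buildah-edit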
STEP 1: FROM openjdk:8
Getting image source signatures
Copying blob sha256:dd8c6d374ea51e3dd671f71b28d025a7794ebea181b00838987d0b4d8a51372f
Copying blob sha256:55769680e8277a4ff083d05f0993d1483b3d26b93a8814cf3c6f04fe5975ffa0
Copying blob sha256:50e431f790939a2f924af65084cc9d39c3d3fb9ad2d57d183b7eadf86ea46992
Copying blob sha256:c85513200d847a64a6e8f2cb714e2169f559b24b7736c586ff7b9aaedf71f410
Copying blob sha256:5943eea6cb7c64e2000d0817410b37368b8307b639909cd590069738adee74d5
Copying blob sha256:e27ce2095ec233759347c30234b114a10cfdd9871c8338738025aba71fe11701
Copying blob sha256:3ed8ceae72a639e8b56c5a0022433947ff1c253ced28a3640fb81c641c3344f3
Copying config sha256:cdf26cc71b50331364eb8081229b11ab90806546f3c7e618b7a4defb4d11726d
Writing manifest to image destination
Storing signatures
level=error msg="Error while applying layer: ApplyLayer exit status 1 stdout: stderr: permission denied"
error creating build container: The following failures happened while trying to pull image specified by "openjdk:8" based on search registries in /etc/containers/registries.conf:
* "localhost/openjdk:8": Error initializing source docker://localhost/openjdk:8: error pinging docker registry localhost: Get http://localhost/v2/: dial tcp 127.0.0.1:80: connect: connection refused
* "hyper-sia.lo:5000/openjdk:8": Error committing the finished image: error adding layer with blob "sha256:50e431f790939a2f924af65084cc9d39c3d3fb9ad2d57d183b7eadf86ea46992": ApplyLayer exit status 1 stdout: stderr: permission denied
level=error msg="exit status 1"
GitHub Issue: unsupported secret type for registry authentication #1360
Source version: camel-k:1.0.0-RC2
Excerpt from trait/builder.go:
var (
    plainDockerBuildahRegistrySecret = registrySecret{
        fileName:    "config.json",
        mountPath:   "/buildah/.docker",
        destination: "config.json",
        refEnv:      "REGISTRY_AUTH_FILE",
    }

    buildahRegistrySecrets = []registrySecret{
        plainDockerBuildahRegistrySecret,
    }
)

var (
    gcrKanikoRegistrySecret = registrySecret{
        fileName:    "kaniko-secret.json",
        mountPath:   "/secret",
        destination: "kaniko-secret.json",
        refEnv:      "GOOGLE_APPLICATION_CREDENTIALS",
    }
    plainDockerKanikoRegistrySecret = registrySecret{
        fileName:    "config.json",
        mountPath:   "/kaniko/.docker",
        destination: "config.json",
    }
    standardDockerKanikoRegistrySecret = registrySecret{
        fileName:    corev1.DockerConfigJsonKey,
        mountPath:   "/kaniko/.docker",
        destination: "config.json",
    }

    kanikoRegistrySecrets = []registrySecret{
        gcrKanikoRegistrySecret,
        plainDockerKanikoRegistrySecret,
        standardDockerKanikoRegistrySecret,
    }
)

func getRegistrySecretFor(e *Environment, registrySecrets []registrySecret) (registrySecret, error) {
    secret := corev1.Secret{}
    err := e.Client.Get(e.C, client.ObjectKey{Namespace: e.Platform.Namespace, Name: e.Platform.Status.Build.Registry.Secret}, &secret)
    if err != nil {
        return registrySecret{}, err
    }
    for _, k := range registrySecrets {
        if _, ok := secret.Data[k.fileName]; ok {
            return k, nil
        }
    }
    return registrySecret{}, errors.New("unsupported secret type for registry authentication")
}
Excerpt from k8s.io/api/core/v1/types.go:
// DockerConfigKey is the key of the required data for SecretTypeDockercfg secrets
DockerConfigKey = ".dockercfg"
// DockerConfigJsonKey is the key of the required data for SecretTypeDockerConfigJson secrets
DockerConfigJsonKey = ".dockerconfigjson"
As the code shows, the buildah path only looks for a config.json key and never checks .dockercfg or .dockerconfigjson, while the secret created by kamel install has the key .dockerconfigjson — hence unsupported secret type for registry authentication.
The fix commit: fix(buildah): Support old Docker registry authentication format
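To see the mismatch on a live cluster, you can list which data keys the generated registry secret actually carries (the secret name is whatever kamel install created; shown here as a placeholder):
kubectl get secrets -n camel-k
kubectl describe secret <registry-secret-name> -n camel-k   # the Data section shows only a ".dockerconfigjson" key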
Version: camel-k:1.0.0-RC2, file pkg/util/minishift/minishift.go:
package minishift

import (
    "context"
    "strconv"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    k8sclient "sigs.k8s.io/controller-runtime/pkg/client"

    "github.com/apache/camel-k/pkg/client"
)

const (
    registryNamespace = "kube-system"
)

// FindRegistry returns the Minishift registry location if any
func FindRegistry(ctx context.Context, c client.Client) (*string, error) {
    svcs := corev1.ServiceList{
        TypeMeta: metav1.TypeMeta{
            APIVersion: corev1.SchemeGroupVersion.String(),
            Kind:       "Service",
        },
    }
    err := c.List(ctx, &svcs,
        k8sclient.InNamespace(registryNamespace),
        k8sclient.MatchingLabels{
            "kubernetes.io/minikube-addons": "registry",
        })
    if err != nil {
        return nil, err
    }
    if len(svcs.Items) == 0 {
        return nil, nil
    }
    svc := svcs.Items[0]
    ip := svc.Spec.ClusterIP
    portStr := ""
    if len(svc.Spec.Ports) > 0 {
        port := svc.Spec.Ports[0].Port
        if port > 0 && port != 80 {
            portStr = ":" + strconv.FormatInt(int64(port), 10)
        }
    }
    registry := ip + portStr
    return &registry, nil
}
Registry auto-discovery supports Minikube: the registry Service is looked up by the label kubernetes.io/minikube-addons: registry, so an in-cluster registry can be manually labelled to masquerade as the Minikube registry.
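For a registry Service that does not already carry the label (the Service manifest used earlier in this post already does), it can be added by hand; note that FindRegistry only searches the kube-system namespace:
kubectl label svc <your-registry-service> -n kube-system kubernetes.io/minikube-addons=registry --overwrite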
[1] Error creating build container while building the Dockerfile from React Native: https://github.com/react-native-community/docker-android/blob/master/Dockerfile