Please credit the source when reposting: Submitting Argo workflows from Java
Argo is a workflow scheduling tool built on top of Kubernetes. For background, see:
Workflow task scheduling tool Argo
The requirement here is to submit Argo workflow definitions through an API call, rather than manually through the shell.
Workflows are normally submitted with the `argo submit` command:
argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml
This is usually run from a Linux shell.
After searching the Argo website and GitHub for a long time, I could not find any documented API or client library that can trigger a submission.
Official site
GitHub
So let's change the approach.
The Argo scheduler is itself a kind of Kubernetes service, so a workflow can also be submitted with a Kubernetes command:
kubectl create -f https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml
And Kubernetes has client libraries for several other languages.
If you want to write applications against the Kubernetes REST API, you do not need to implement the API calls or the request/response types yourself — just use the client library for the language you are working in.
Client libraries usually handle common tasks such as authentication for you. If the API client runs inside a Kubernetes cluster, most client libraries can discover and use the Kubernetes service account to authenticate, or read the credentials and API server address from a kubeconfig file.
For details, see the following links:
Using the Kubernetes API from client libraries in various programming languages
The official Kubernetes Java client
The community-maintained Java client, fabric8io
The YAML files used for testing are as follows:
hello-k8s.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  namespace: default
spec:
  restartPolicy: OnFailure
  containers:
  - name: hello
    image: "ubuntu"
    command: ["/bin/echo", "hello", "world"]
hello-argo.yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-parameters-
  namespace: default
spec:
  # invoke the whalesay template with
  # "hello world" as the argument
  # to the message parameter
  entrypoint: whalesay
  arguments:
    parameters:
    - name: message
      value: hello
  templates:
  - name: whalesay
    inputs:
      parameters:
      - name: message # parameter declaration
    container:
      # run cowsay with that message input parameter as args
      image: docker/whalesay
      command: [echo, "{{inputs.parameters.message}}"]
Submitting with Argo:
argo submit hello-argo.yaml -p message="goodbye"
the message parameter is passed in automatically, and the output is: goodbye
With the kubectl command, however, there is no way to pass a parameter for automatic substitution:
kubectl create -f hello-argo.yaml
Instead, the hello-argo.yaml file has to be physically rewritten, for example:
sed -i "s/hello/goodbye/" hello-argo.yaml
kubectl create -f hello-argo.yaml
So when calling from Java, we likewise need to read the file and substitute its contents.
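The substitution itself is plain string manipulation. A minimal sketch (the helper name `withMessage` is mine; the `value: hello` placeholder comes from hello-argo.yaml above):

```java
public class WorkflowParams {
    // Replace the first "value: hello" placeholder with the desired
    // parameter value, mimicking what `argo submit -p message=...` does.
    static String withMessage(String yaml, String message) {
        return yaml.replaceFirst("value: hello", "value: " + message);
    }

    public static void main(String[] args) {
        String yaml = "arguments:\n  parameters:\n  - name: message\n    value: hello\n";
        System.out.println(withMessage(yaml, "goodbye"));
    }
}
```

The same approach is used in the controller later in this post.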
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>4.1.1</version>
</dependency>
KubernetesClient client =new DefaultKubernetesClient();
This form loads configuration with the following precedence:
1. System properties
2. Environment variables
3. The kubeconfig file
4. A mounted service account token & certificate
System properties are recommended.
The following system properties & environment variables can be used for configuration:
kubernetes.master/KUBERNETES_MASTER
kubernetes.api.version/KUBERNETES_API_VERSION
kubernetes.oapi.version/KUBERNETES_OAPI_VERSION
kubernetes.trust.certificates/KUBERNETES_TRUST_CERTIFICATES
kubernetes.certs.ca.file/KUBERNETES_CERTS_CA_FILE
kubernetes.certs.ca.data/KUBERNETES_CERTS_CA_DATA
kubernetes.certs.client.file/KUBERNETES_CERTS_CLIENT_FILE
kubernetes.certs.client.data/KUBERNETES_CERTS_CLIENT_DATA
kubernetes.certs.client.key.file/KUBERNETES_CERTS_CLIENT_KEY_FILE
kubernetes.certs.client.key.data/KUBERNETES_CERTS_CLIENT_KEY_DATA
kubernetes.certs.client.key.algo/KUBERNETES_CERTS_CLIENT_KEY_ALGO
kubernetes.certs.client.key.passphrase / KUBERNETES_CERTS_CLIENT_KEY_PASSPHRASE
kubernetes.auth.basic.username/KUBERNETES_AUTH_BASIC_USERNAME
kubernetes.auth.basic.password/KUBERNETES_AUTH_BASIC_PASSWORD
kubernetes.auth.tryKubeConfig/KUBERNETES_AUTH_TRYKUBECONFIG
kubernetes.auth.tryServiceAccount/KUBERNETES_AUTH_TRYSERVICEACCOUNT
kubernetes.auth.token/KUBERNETES_AUTH_TOKEN
kubernetes.watch.reconnectInterval/KUBERNETES_WATCH_RECONNECTINTERVAL
kubernetes.watch.reconnectLimit/KUBERNETES_WATCH_RECONNECTLIMIT
kubernetes.user.agent/KUBERNETES_USER_AGENT
kubernetes.tls.versions/KUBERNETES_TLS_VERSIONS
kubernetes.truststore.file/KUBERNETES_TRUSTSTORE_FILE
kubernetes.truststore.passphrase/KUBERNETES_TRUSTSTORE_PASSPHRASE
kubernetes.keystore.file/KUBERNETES_KEYSTORE_FILE
kubernetes.keystore.passphrase/KUBERNETES_KEYSTORE_PASSPHRASE
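For example, the master URL can be supplied through the corresponding system property before the client is constructed (a sketch; fabric8's `Config` picks up `kubernetes.master` when no explicit URL is given):

```java
public class SysPropConfig {
    public static void main(String[] args) {
        // Equivalent to exporting KUBERNETES_MASTER in the environment;
        // must be set before DefaultKubernetesClient is instantiated.
        System.setProperty("kubernetes.master", "https://x.x.x.x:8080");
        System.out.println(System.getProperty("kubernetes.master"));
    }
}
```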
Via an auth token:
ConfigBuilder builder = new ConfigBuilder();
if (authToken !=null ) {
builder.withOauthToken(authToken);
}
Config config = builder.build();
final KubernetesClient client = new DefaultKubernetesClient(config);
Or connect via the master URL.
First look up the master URL:
[zzq@localhost ~]$ kubectl cluster-info
Kubernetes master is running at https://xxxx:8080
Then, in the Java code:
Config config = new ConfigBuilder().withMasterUrl("http://x.x.x.x:8080").build();
KubernetesClient client = new DefaultKubernetesClient(config); // the default client is sufficient
When running outside a pod, the master-URL approach is recommended.
When running inside a pod, the configuration is picked up automatically:
KubernetesClient client = new DefaultKubernetesClient();
System.out.println("Auto-detected k8s configuration");
if (!"auto".equals(masterurl)) {
Config config = new ConfigBuilder().withMasterUrl(masterurl).build();
client = new DefaultKubernetesClient(config);
System.out.println("Custom k8s configuration: " + masterurl);
}
For accessing the Kubernetes API from inside a pod, see
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-a-pod
For more parameters and the client initialization source code, see
https://raw.githubusercontent.com/fabric8io/kubernetes-client/master/kubernetes-client/src/main/java/io/fabric8/kubernetes/client/Config.java
The client exposes DSL entry points such as:
client.namespaces()
client.services()
client.pods()
client.customResources()
client.storage()
client.network()
Create:
Service created = client.services().inNamespace(namespace).create(service);
Update:
Namespace namespace = client.namespaces().withName(name).get();
//update resources
client.namespaces().createOrReplace(namespace);
Query:
ServiceList services = client.services().inNamespace("default").list();
Service service = client.services().inNamespace("default").withName("myservice").get();
Delete:
client.services().inNamespace("default").withName("myservice").delete();
Edit:
client.services().inNamespace("default").withName("myservice").edit()
.editMetadata()
.addToLabels("another", "label")
.endMetadata()
.done();
In some cases you want to read resources from an external source instead of defining them with the client's DSL — for example when you already have a fairly complex YAML file and do not want to rebuild it in code. For these cases, the client can load resources from:
a file (both java.io.File and java.lang.String are supported)
a URL
an input stream
Once the resource is loaded, it can be created.
For example, let's read a Pod from a YAML file and work with it:
Pod refreshed = client.load("/path/to/a/pod.yml").fromServer().get();
Boolean deleted = client.load("/workspace/pod.yml").delete();
LogWatch handle = client.load("/workspace/pod.yml").watchLog(System.out);
ArgoController.java
package com.biologic.api;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.io.ClassPathResource;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import com.biologic.api.service.LogService;
import io.fabric8.kubernetes.api.model.DoneablePod;
import io.fabric8.kubernetes.api.model.HasMetadata;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodList;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.NonNamespaceOperation;
import io.fabric8.kubernetes.client.dsl.PodResource;
@Controller
@RequestMapping(path = "/report/generation/argo")
public class ArgoController {
public final static String SECRET_TOKEN_ARGO = "123";
@Value("${env}")
private String env;
@Value("${kubernetes.master}")
private String masterurl;
@Autowired
private LogService logService;
@PostMapping(value = "/{chip}/quality-check")
@ResponseBody
public Object quality_check(@RequestParam(value = "token", required = true) String token,
@PathVariable("chip") String chip) {
if (!token.equals(SECRET_TOKEN_ARGO)) {
return "unauthorized";
}
logService.printK8sNow("Current environment: " + env);
String namespace = "default";
if ("sit".equals(env)) {
namespace = "sit";
}
try {
ClassPathResource yamlresource = new ClassPathResource("hello.yaml");
InputStream input=yamlresource.getInputStream();
StringBuffer sb = new StringBuffer();
readToBuffer(sb, input);
String fileContent = sb.toString();
String fileContent2 = fileContent.replaceFirst("value: hello", "value: " + chip);
System.out.println(fileContent2);
InputStream stream = new ByteArrayInputStream(fileContent2.getBytes());
Config config = new ConfigBuilder().withMasterUrl(masterurl).build();
KubernetesClient client = new DefaultKubernetesClient(config);
namespace = client.getNamespace();
List<HasMetadata> resources = client.load(stream).get();
if (resources.isEmpty()) {
System.err.println("No resources loaded from file: " +yamlresource.getPath());
return "No resources loaded from file: " +yamlresource.getPath();
}
HasMetadata resource = resources.get(0);
if (resource instanceof Pod){
Pod pod = (Pod) resource;
System.out.println("Creating pod in namespace " + pod.getMetadata().getNamespace());
NonNamespaceOperation<Pod, PodList, DoneablePod, PodResource<Pod, DoneablePod>> pods = client.pods().inNamespace(namespace);
Pod result = pods.create(pod);
System.out.println("Created pod " + result.getMetadata().getName());
} else {
System.err.println("Loaded resource is not a Pod! " + resource);
}
return fileContent2;
} catch (IOException e) {
e.printStackTrace();
System.out.println("argo configuration file not found");
}
return "ok";
}
/**
* Read the contents of a text stream into a buffer.
*
* @param buffer buffer to append to
* @param input  input stream to read from
* @throws IOException on read failure
* @author cn.outofmemory
* @date 2013-1-7
*/
private void readToBuffer(StringBuffer buffer, InputStream input) throws IOException {
String line; // holds each line as it is read
BufferedReader reader = new BufferedReader(new InputStreamReader(input));
line = reader.readLine(); // read the first line
while (line != null) { // a null line means the end of the stream
buffer.append(line); // append the line to the buffer
buffer.append("\n"); // append a newline
line = reader.readLine(); // read the next line
}
reader.close();
input.close();
}
}
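As an aside, readToBuffer can be replaced with a short helper that drains the stream in one pass (my own sketch, not from the original post). Unlike readToBuffer, it preserves the original line endings instead of normalizing them to \n:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class StreamUtil {
    // Drain an InputStream into a String, assuming UTF-8 content.
    static String readAll(InputStream input) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = input.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        input.close();
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello\nworld\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(in));
    }
}
```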
The API is triggered like this:
curl -d "token=123" http://localhost:9999/report/generation/argo/123/quality-check
The pod is created successfully, as shown in the figure.
Note that the name here is hello-world, so creating it a second time fails with a duplicate-name error.
It is better to replace a suffix in the name with a random value, for example:
hello.yaml
name: hello-world-random
ArgoController
String fileContent2 = fileContent.replaceFirst("-random", "-" + (int) (Math.random() * 100000));
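Note that a raw Math.random() value looks like 0.123..., which is awkward in a resource name. A slightly safer variant (my own helper, not from the original post) draws the suffix from DNS-safe characters, similar to what Argo's generateName does server-side:

```java
import java.util.Random;

public class NameSuffix {
    private static final String ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789";

    // Build a random suffix of the given length from characters that are
    // always valid in a Kubernetes resource name.
    static String random(int length) {
        Random rnd = new Random();
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(rnd.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println("hello-world-" + random(5));
    }
}
```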
Argo resource types are more troublesome, because the default client does not support them.
You get the error: No resource type found for:argoproj.io/v1alpha1
com.fasterxml.jackson.databind.JsonMappingException: No resource type found for:argoproj.io/v1alpha1#Workflow
We have to register a custom resource before we can submit Argo-typed YAML.
First, check which CRDs already exist in the cluster,
using the commands:
kubectl get crd
kubectl get crd | grep argo
Querying the existing CRDs from Java:
try {
if (!client.supportsApiPath("/apis/apiextensions.k8s.io/v1beta1") && !client.supportsApiPath("/apis/apiextensions.k8s.io/v1")) {
System.out.println("WARNING this cluster does not support the API Group apiextensions.k8s.io");
return "fail";
}
CustomResourceDefinitionList list = client.customResourceDefinitions().list();
if (list == null) {
System.out.println("ERROR no list returned!");
return "fail";
}
List<CustomResourceDefinition> items = list.getItems();
for (CustomResourceDefinition item : items) {
System.out.println("CustomResourceDefinition " + item.getMetadata().getName() + " has version: " + item.getApiVersion());
}
} catch (KubernetesClientException e) {
System.out.println("Failed: " + e);
e.printStackTrace();
}
The output is:
CustomResourceDefinition alertmanagers.monitoring.coreos.com has version: apiextensions/v1beta1
CustomResourceDefinition backups.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition backupstoragelocations.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition deletebackuprequests.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition downloadrequests.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition elasticsearchclusters.enterprises.upmc.com has version: apiextensions/v1beta1
CustomResourceDefinition podvolumebackups.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition podvolumerestores.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition prometheuses.monitoring.coreos.com has version: apiextensions/v1beta1
CustomResourceDefinition prometheusrules.monitoring.coreos.com has version: apiextensions/v1beta1
CustomResourceDefinition resticrepositories.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition restores.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition schedules.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition servicemonitors.monitoring.coreos.com has version: apiextensions/v1beta1
CustomResourceDefinition volumesnapshotlocations.ark.heptio.com has version: apiextensions/v1beta1
CustomResourceDefinition workflows.argoproj.io has version: apiextensions/v1beta1
The Argo CRD is named workflows.argoproj.io.
Next, define a few supporting classes.
ArgoCluster.java
package com.biologic.entity;
import java.util.Map;
import io.fabric8.kubernetes.client.CustomResource;
public class ArgoCluster extends CustomResource {
private ArgoSpec spec;
private Map<String, Object> status;
@Override
public String toString() {
return "Argo{" +
"apiVersion='" + getApiVersion() + '\'' +
", metadata=" + getMetadata() +
", spec=" + spec +
'}';
}
public ArgoSpec getSpec() {
return spec;
}
public void setSpec(ArgoSpec spec) {
this.spec = spec;
}
public Map<String, Object> getStatus() {
return status;
}
public void setStatus(Map<String, Object> status) {
this.status = status;
}
}
ArgoSpec.java
package com.biologic.entity;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
import io.fabric8.kubernetes.api.model.KubernetesResource;
@JsonDeserialize(
using = JsonDeserializer.None.class
)
public class ArgoSpec implements KubernetesResource {
private Object entrypoint;
private Object arguments;
private Object templates;
private Object volumes;
public Object getEntrypoint() {
return entrypoint;
}
public void setEntrypoint(Object entrypoint) {
this.entrypoint = entrypoint;
}
public Object getArguments() {
return arguments;
}
public void setArguments(Object arguments) {
this.arguments = arguments;
}
public Object getTemplates() {
return templates;
}
public void setTemplates(Object templates) {
this.templates = templates;
}
public Object getVolumes() {
return volumes;
}
public void setVolumes(Object volumes) {
this.volumes = volumes;
}
}
ArgoList.java
package com.biologic.entity;
import io.fabric8.kubernetes.client.CustomResourceList;
public class ArgoList extends CustomResourceList<ArgoCluster> {
}
DoneableArgo.java
package com.biologic.entity;
import io.fabric8.kubernetes.api.builder.Function;
import io.fabric8.kubernetes.client.CustomResourceDoneable;
public class DoneableArgo extends CustomResourceDoneable<ArgoCluster> {
public DoneableArgo(ArgoCluster resource, Function<ArgoCluster, ArgoCluster> function) {
super(resource, function);
}
}
The key code for working with the CRD:
Config config = new ConfigBuilder().withMasterUrl(masterurl).build();
KubernetesClient client = new DefaultKubernetesClient(config);
CustomResourceDefinition argoCRD = null;
try {
if (!client.supportsApiPath("/apis/apiextensions.k8s.io/v1beta1") && !client.supportsApiPath("/apis/apiextensions.k8s.io/v1")) {
System.out.println("WARNING this cluster does not support the API Group apiextensions.k8s.io");
return "fail";
}
CustomResourceDefinitionList list = client.customResourceDefinitions().list();
if (list == null) {
System.out.println("ERROR no list returned!");
return "fail";
}
List<CustomResourceDefinition> items = list.getItems();
for (CustomResourceDefinition item : items) {
System.out.println("CustomResourceDefinition " + item.getMetadata().getName() + " has version: " + item.getApiVersion());
if (ARGO_CRD_NAME.equals(item.getMetadata().getName())) {
argoCRD = item;
}
}
} catch (KubernetesClientException e) {
System.out.println("Failed: " + e);
e.printStackTrace();
}
if (argoCRD != null) {
System.out.println("Found CRD: " + argoCRD.getMetadata().getSelfLink());
} else {
return "fail";
}
MixedOperation<ArgoCluster, ArgoList, DoneableArgo, Resource<ArgoCluster, DoneableArgo>> argoClient = client.customResources(argoCRD, ArgoCluster.class, ArgoList.class, DoneableArgo.class);
CustomResourceList<ArgoCluster> argoList = argoClient.list();
List<ArgoCluster> items = argoList.getItems();
System.out.println(" found " + items.size() + " argo");
for (ArgoCluster item : items) {
System.out.println(" " + item);
}
ArgoCluster createArgo= argoClient.load(stream).get();
ArgoCluster finishpod=argoClient.create(createArgo);
String podname=finishpod.getMetadata().getName();
Pod pod = client.pods().inNamespace(namespace).withName(podname).get();
if (pod !=null){
System.out.println("Creating pod in namespace " + pod.getMetadata().getNamespace());
System.out.println("Created pod " + podname);
} else {
System.err.println("not found a Pod! " + podname);
}
The full code:
package com.biologic.api;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.io.ClassPathResource;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import com.biologic.api.service.LogService;
import com.biologic.entity.ArgoCluster;
import com.biologic.entity.ArgoList;
import com.biologic.entity.DoneableArgo;
import io.fabric8.kubernetes.api.model.DoneablePod;
import io.fabric8.kubernetes.api.model.HasMetadata;
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodList;
import io.fabric8.kubernetes.api.model.apiextensions.CustomResourceDefinition;
import io.fabric8.kubernetes.api.model.apiextensions.CustomResourceDefinitionList;
import io.fabric8.kubernetes.client.AppsAPIGroupClient;
import io.fabric8.kubernetes.client.AutoAdaptableKubernetesClient;
import io.fabric8.kubernetes.client.AutoscalingAPIGroupClient;
import io.fabric8.kubernetes.client.BatchAPIGroupClient;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.CustomResourceList;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.ExtensionsAPIGroupClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientException;
import io.fabric8.kubernetes.client.dsl.MixedOperation;
import io.fabric8.kubernetes.client.dsl.NonNamespaceOperation;
import io.fabric8.kubernetes.client.dsl.PodResource;
import io.fabric8.kubernetes.client.dsl.Resource;
@Controller
@RequestMapping(path = "/report/generation/argo")
public class ArgoController {
public final static String SECRET_TOKEN_ARGO = "uiuda";
private static final String ARGO_CRD_NAME = "workflows.argoproj.io";
@Value("${env}")
private String env;
@Value("${kubernetes.master}")
private String masterurl;
@Autowired
private LogService logService;
@PostMapping(value = "/{chip}/quality-check")
@ResponseBody
public Object quality_check(@RequestParam(value = "token", required = true) String token,
@PathVariable("chip") String chip) {
if (!token.equals(SECRET_TOKEN_ARGO)) {
return "unauthorized";
}
logService.printK8sNow("Current environment: " + env);
String namespace = "default";
if ("sit".equals(env)) {
namespace = "sit";
}
try {
ClassPathResource yamlresource = new ClassPathResource("hello.yaml");
InputStream input=yamlresource.getInputStream();
StringBuffer sb = new StringBuffer();
readToBuffer(sb, input);
String fileContent = sb.toString();
String fileContent2 = fileContent.replaceFirst("value: hello", "value: '" + chip +"'");
String fileContent3 = fileContent2.replaceFirst("-random", "-" + chip);
String fileContent4 = fileContent3.replaceFirst("namespace: default", "namespace: " + namespace);
System.out.println(fileContent4);
InputStream stream = new ByteArrayInputStream(fileContent4.getBytes());
Config config = new ConfigBuilder().withMasterUrl(masterurl).build();
KubernetesClient client = new DefaultKubernetesClient(config);
CustomResourceDefinition argoCRD = null;
try {
if (!client.supportsApiPath("/apis/apiextensions.k8s.io/v1beta1") && !client.supportsApiPath("/apis/apiextensions.k8s.io/v1")) {
System.out.println("WARNING this cluster does not support the API Group apiextensions.k8s.io");
return "fail";
}
CustomResourceDefinitionList list = client.customResourceDefinitions().list();
if (list == null) {
System.out.println("ERROR no list returned!");
return "fail";
}
List<CustomResourceDefinition> items = list.getItems();
for (CustomResourceDefinition item : items) {
System.out.println("CustomResourceDefinition " + item.getMetadata().getName() + " has version: " + item.getApiVersion());
if (ARGO_CRD_NAME.equals(item.getMetadata().getName())) {
argoCRD = item;
}
}
} catch (KubernetesClientException e) {
System.out.println("Failed: " + e);
e.printStackTrace();
}
if (argoCRD != null) {
System.out.println("Found CRD: " + argoCRD.getMetadata().getSelfLink());
} else {
return "fail";
}
MixedOperation<ArgoCluster, ArgoList, DoneableArgo, Resource<ArgoCluster, DoneableArgo>> argoClient = client.customResources(argoCRD, ArgoCluster.class, ArgoList.class, DoneableArgo.class);
CustomResourceList<ArgoCluster> argoList = argoClient.list();
List<ArgoCluster> items = argoList.getItems();
System.out.println(" found " + items.size() + " argo");
for (ArgoCluster item : items) {
System.out.println(" " + item);
}
ArgoCluster createArgo= argoClient.load(stream).get();
ArgoCluster finishpod=argoClient.create(createArgo);
String podname=finishpod.getMetadata().getName();
Pod pod = client.pods().inNamespace(namespace).withName(podname).get();
if (pod !=null){
System.out.println("Creating pod in namespace " + pod.getMetadata().getNamespace());
System.out.println("Created pod " + podname);
} else {
System.err.println("not found a Pod! " + podname);
}
return fileContent4;
} catch (IOException e) {
e.printStackTrace();
System.out.println("argo configuration file not found");
}
return "ok";
}
/**
* Read the contents of a text stream into a buffer.
*
* @param buffer buffer to append to
* @param input  input stream to read from
* @throws IOException on read failure
* @author cn.outofmemory
* @date 2013-1-7
*/
private void readToBuffer(StringBuffer buffer, InputStream input) throws IOException {
String line; // holds each line as it is read
BufferedReader reader = new BufferedReader(new InputStreamReader(input));
line = reader.readLine(); // read the first line
while (line != null) { // a null line means the end of the stream
buffer.append(line); // append the line to the buffer
buffer.append("\n"); // append a newline
line = reader.readLine(); // read the next line
}
reader.close();
input.close();
}
}
Possible problem — Forbidden! Configured service account doesn't have access, and User "system:serviceaccount:default:default" cannot get path "/"
Solution 1: add a role
When the cluster uses RBAC (Role-Based Access Control), the default service account is subject to many restrictions.
The following command grants the default service account admin privileges in the default namespace; we run it locally before argo submit:
kubectl create rolebinding default-admin --clusterrole=admin --serviceaccount=default:default
Likewise, a pod running in the cluster also needs to obtain permissions.
One way for a pod to get permissions is through a role.
Roles come in two kinds: namespaced Roles, bound with a RoleBinding, and cluster-wide ClusterRoles, bound with a ClusterRoleBinding.
Bind whichever fits your needs.
Because we access the root path of the API here, broader permissions are required, so we use a ClusterRoleBinding.
The steps are as follows.
First, create a dedicated ServiceAccount named report-api for the report-api pod.
Run:
vi report-api-account.yaml
with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: report-api
Then create it:
kubectl create -f report-api-account.yaml
Then grant report-api in the default namespace the cluster-admin role.
From any environment with a kubectl command line, run:
kubectl create clusterrolebinding report-api-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:report-api
The output:
zhangxiaofans-MacBook-Pro:platform joe$ kubectl create clusterrolebinding report-api-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:report-api
clusterrolebinding.rbac.authorization.k8s.io "report-api-on-cluster-admin" created
Finally, add the following to the pod's Deployment YAML:
serviceAccount: report-api
A complete deployment.yaml example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: report-api
spec:
  selector:
    matchLabels:
      app: report-api
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: report-api
    spec:
      serviceAccount: report-api
      containers:
      - name: report-api
        image: 123/vpc/java8:latest
        imagePullPolicy: Always
        command: ["/bin/sh","-c"]
        args: ["java -jar /jar/report-api-1.0.0-SNAPSHOT.jar --spring.profiles.active=beta"]
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        ports:
        - containerPort: 9988
        volumeMounts:
        - name: workdir
          mountPath: /jar
        env:
        - name: USERDB_USERNAME_BETA
          value: _USERDB_USERNAME_BETA_
        - name: USERDB_PASSWORD_BETA
          value: _USERDB_PASSWORD_BETA_
      initContainers:
      - name: pull-lib
        image: anigeo/awscli:latest
        command: ["/bin/sh","-c"]
        args: ["ls"]
        env:
        - name: AWS_DEFAULT_REGION
          value: cn-southwest-2
        volumeMounts:
        - name: workdir
          mountPath: /jar
      volumes:
      - name: workdir
        emptyDir: {}
Reference links:
About ClusterRoleBinding:
https://bugzilla.redhat.com/show_bug.cgi?id=1392767
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://github.com/kubernetes-incubator/external-storage/blob/master/ceph/rbd/deploy/rbac/rolebinding.yaml
https://stackoverflow.com/questions/47973570/kubernetes-log-user-systemserviceaccountdefaultdefault-cannot-get-services
https://github.com/fabric8io/fabric8/issues/6840
Solution 2: pass a token
The second way for a pod to obtain permissions is through a token that carries the required privileges.
For how to generate a token, see
https://jimmysong.io/kubernetes-handbook/guide/auth-with-kubeconfig-or-token.html
For more background on service accounts and tokens, see
https://www.sharpcode.cn/devops/kubernetes-authn-authz/
https://tonybai.com/2017/03/03/access-api-server-from-a-pod-through-serviceaccount/
https://www.troyying.xyz/index.php/IT/8.html
https://docops.ca.com/ca-apm/10-7/cn/kubernetes-458828513.html
Usage:
config = new ConfigBuilder().withMasterUrl(master)
.withTrustCerts(true)
.withNamespace(namespace)
.withOauthToken(cluster.getOauthToken())
.withUsername(cluster.getUsername())
.withPassword(cluster.getPassword())
.removeFromTlsVersions(TlsVersion.TLS_1_0)
.removeFromTlsVersions(TlsVersion.TLS_1_1)
.removeFromTlsVersions(TlsVersion.TLS_1_2)
.withRequestTimeout(REQUEST_TIMEOUT)
.withConnectionTimeout(CONNECTION_TIMEOUT)
.build();
Possible problem — com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "status" (class com.biologic.entity.ArgoCluster), not marked as ignorable (4 known properties: "spec", "kind", "apiVersion", "metadata")
Cause: ArgoCluster has no status field.
Adding a status field to ArgoCluster fixes it.
To determine the type of the status field, set a breakpoint and inspect it, as shown in the figure below:
Reference links:
https://raw.githubusercontent.com/argoproj/argo/v2.2.0/manifests/install.yaml
Kubernetes CRDs (custom resources)
https://argoproj.github.io/docs/argo/docs/rest-api.html
https://github.com/kubernetes-client/java/blob/master/kubernetes/docs/CustomObjectsApi.md
https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/CRDExample.java
https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/ListCustomResourceDefinitions.java
https://juejin.im/post/5c108bb8e51d450c5a47c67f
https://blog.csdn.net/jiangpeng_xu/article/details/83688990
https://github.com/fabric8io/kubernetes-client
https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/CreatePod.java
https://blog.csdn.net/jiangpeng_xu/article/details/83688990
https://www.helplib.com/GitHub/article_128598
https://github.com/fabric8io/kubernetes-client/tree/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples
Error "the server could not find the requested resource" when creating a Deployment with the io.fabric8 client API
Problem:
When deploying from a deployment file, the call kept failing with "the server could not find the requested resource". The cause turned out to be the apiVersion declared in the deployment file.
Against the Kubernetes 1.8.2 API, creating a deployment file with either
apiVersion: apps/v1beta2
or
apiVersion: apps/v1beta1
produces the error above.
public void testCreateDeploymentWithFile(){
String sfile = "E:\\deployment.yaml";
File file = new File(sfile);
FileInputStream fis = null;
try {
fis = new FileInputStream(file);
List<HasMetadata> result = client.load(fis).createOrReplace();
System.out.println("result:" + result);
} catch (Exception e) {
e.printStackTrace();
} finally{
if(fis != null){
try {
fis.close();
} catch (Exception e2) {
// TODO: handle exception
}
}
}
}
Solution:
Change the apiVersion to
apiVersion: extensions/v1beta1
and the problem goes away.
The replacement can be done with a simple substitution such as:
String fileContent2 = fileContent.replaceFirst("^apiVersion:.*\r\nkind: Deployment",
"apiVersion: extensions/v1beta1\r\nkind: Deployment");
Note:
If the deployment file also contains a Service definition, the Service's apiVersion must not be extensions/v1beta1; it should be v1, i.e.:
apiVersion: v1
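The `^...\r\n` pattern above only matches when the apiVersion line is the very first line of the file and the file uses Windows line endings. A slightly more robust variant (my own sketch) uses MULTILINE mode and Java's `\R` line-break class:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ApiVersionFix {
    // Rewrite the apiVersion line that directly precedes "kind: Deployment",
    // regardless of line-ending style or where the pair appears in the file.
    static String fixDeploymentApiVersion(String yaml, String newApiVersion) {
        Pattern p = Pattern.compile("^apiVersion:.*\\R(kind: Deployment)", Pattern.MULTILINE);
        Matcher m = p.matcher(yaml);
        return m.replaceFirst("apiVersion: " + Matcher.quoteReplacement(newApiVersion) + "\n$1");
    }

    public static void main(String[] args) {
        String yaml = "apiVersion: apps/v1beta2\nkind: Deployment\nmetadata:\n  name: demo\n";
        System.out.println(fixDeploymentApiVersion(yaml, "extensions/v1beta1"));
    }
}
```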
Possible problem — the following error:
Caused by: java.lang.NoSuchMethodError: com.fasterxml.jackson.databind.ObjectWriter.forType(Lcom/fasterxml/jackson/databind/JavaType;)Lcom/fasterxml/jackson/databind/ObjectWriter
Cause
Spring Boot already bundles Jackson for JSON conversion, so adding another Jackson dependency in Maven puts conflicting versions on the classpath, which causes the exception above.
Solution
Exclude the redundant Jackson artifacts from the kubernetes-client dependency,
as follows:
<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>4.1.1</version>
    <exclusions>
        <exclusion>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Possible problem — authentication failure against the cluster
Cause
The environment is missing Kubernetes authentication information.
Solution
Run the client in an environment configured with a kubeconfig, or inside a Kubernetes pod.
The YAML file also needs to specify a namespace.
References:
1、Support complete app deployment with multiple pods, services using yaml file #906. https://github.com/fabric8io/kubernetes-client/issues/906
2、how to load yaml file #170. https://github.com/kubernetes-client/java/issues/170