Docker commands 2

docker build -t dvm.adsplatformproxy:v1.0.0 .      # build the image
docker run -e WWNamespace=dev -e ZKServerAddress=******  -p 6000:80  6cb913a34ae3    # run the container as a local process
docker run -ti 6cb913a34ae3 /bin/bash   # open an interactive shell inside the image; type exit to leave
docker rmi -f b54d6e186ef4  # force-remove the image (docker rmi takes -f, not -rf)

Viewing tag information in a Docker registry

For the latest (as of 2015-07-31) version of Registry V2, you can get this image from DockerHub:

docker pull distribution/registry:master

List all repositories (effectively images):

curl -X GET https://myregistry:5000/v2/_catalog
> {"repositories":["redis","ubuntu"]}

List all tags for a repository:

curl -X GET https://myregistry:5000/v2/ubuntu/tags/list
> {"name":"ubuntu","tags":["14.04"]}
kubectl get deployment  # list deployments
kubectl delete deployment ****  # delete a deployment; its pods are removed automatically (the service is a separate object and must be deleted on its own)

  

 

kubectl create -f ******.yaml  # create resources from the yaml file, including the config and ingress definitions
kubectl get ingress   # list ingress resources
kubectl get configmap  # list configmaps

 

kubectl edit configmaps *****-config -n *namespace*   # edit a configmap in a specific namespace

kubectl get configmap  # 1. find the configmap
kubectl edit configmap ******    # 2. edit the configmap
..\refreshconfig.ps1 -ConfigMapName dvm-website-config    # 3. refresh the config with the PowerShell script, or delete the existing pods; replacement pods are created automatically and use the new configmap
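Deleting the pods by label also works (app=dvm-website is a hypothetical label here; use whatever label your deployment actually sets), since the deployment recreates them with the updated configmap:

kubectl delete po -l app=dvm-website   # replacement pods mount the updated configmap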
kubectl create -f rc-nginx.yaml    # create the resources defined in the yaml
kubectl replace -f rc-nginx.yaml   # replace the running resources with the yaml definition
kubectl edit po rc-nginx-btv4j     # edit a running pod in place
kubectl delete -f rc-nginx.yaml    # delete everything defined in the yaml
kubectl delete po rc-nginx-btv4j   # delete a single pod
kubectl delete po -lapp=nginx-2    # delete all pods matching the label selector



kubectl describe po rc-nginx-2-btv4j  # show detailed pod information and events

kubectl get namespace   # list namespaces
kubectl get po -o yaml   # output pod details in YAML format
kubectl get po -o json   # output pod details in JSON format
kubectl get po rc-nginx-2-btv4j -o=custom-columns=LABELS:.metadata.labels.app  # use "-o=custom-columns=" to print only the specified field

  


  


 

Updating a ConfigMap

A new command, kubectl rolling-restart, that takes an RC name, incrementally deletes all the pods controlled by the RC, and allows the RC to recreate them.

 

Small workaround (I use deployments and I want to change configs without having real changes in the image/pod):

  • create configMap
  • create deployment with ENV variable (you will use it as indicator for your deployment) in any container
  • update configMap
  • update deployment (change this ENV variable)

k8s will see that the definition of the deployment has changed and will start the process of replacing the pods (sketched below).
PS:
if someone has a better solution, please share
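A minimal sketch of the four steps above, assuming hypothetical names (configmap my-config, deployment my-deployment, indicator variable CONFIG_VERSION); adapt them to your own resources:

kubectl create configmap my-config --from-file=app.conf    # 1. create the configmap
kubectl create -f my-deployment.yaml                       # 2. the deployment mounts my-config and declares env CONFIG_VERSION in a container
kubectl create configmap my-config --from-file=app.conf -o yaml --dry-run | kubectl replace -f -   # 3. update the configmap in place
kubectl set env deployment/my-deployment CONFIG_VERSION=2  # 4. change the indicator variable; k8s sees a changed spec and replaces the pods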

 

 

It feels like the right solution here would enable you to restart a deployment, and reuse most of the deployment parameters for rollouts like MinReadyCount, while allowing for command-line overrides like increasing the parallelism for emergency situations where you need everything to bounce immediately.

 

 

We would also like to see this for deployments maybe like kubectl restart deployment some-api

 

 

Kubernetes is allowed to restart Pods for all sorts of reasons, but the cluster admin isn't allowed to.
I understand the moral stand that 'turn it off and on again' may not be a desired way to operate... but I also think it should be ok to let those who wish to restart a Deployment do so, without resorting to the range of less appetizing tricks like:

  • deleting pods
  • dummy labels
  • dummy environment variables
  • dummy config maps mapped to environment variable
  • rebooting the worker nodes
  • cutting the power to the data centre ?

'No, no, I'm not restarting anything, just correcting a typo in this label here' ?

 

 

This feature would be useful in tandem with kubectl apply: apply will update configs, including Replication Controllers, but pods won't be restarted.

 

 

Could you explain? Should I just use kubectl apply -f new_config.yml with updated deployments, and these deployments will be rolling-restarted?

 

 

Make sure the date is evaluated within the shell (the $(date +%s) sits outside the single-quoted JSON, so the shell expands it before kubectl sees it):

kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"mycontainer","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'

 

 

It should be the app's responsibility to watch the filesystem for changes; as mentioned, you can use checksums on the configmap/secret and force restarts that way.

But if you don't want to change the config at all and just want a rolling restart with an arbitrary pause, a simple pipeline does the job (this one sleeps 30 seconds between each terminated pod):

kubectl get po -l release=my-app -o name | cut -d"/" -f2 | while read p;do kubectl delete po $p;sleep 30;done

alternative command:

kubectl get pods|grep somename|awk '{print $1}' | xargs -i sh -c 'kubectl delete pod -o name {} && sleep 4'

 

 

Two and a half years on and people are still crafting new workarounds, with dummy env vars, dummy labels, ConfigMap and Secret watcher sidecars, scaling to zero, and straight-out rolling-update shell scripts, to simulate the ability to trigger a rolling update. Is this still something cluster admins should not be allowed to do honestly, without the tricks?

kubectl scale --replicas=0 deployment application
kubectl scale --replicas=1 deployment application

Another trick is to initially run:

kubectl set image deployment/my-deployment mycontainer=myimage:latest

and then:

kubectl set image deployment/my-deployment mycontainer=myimage

It will actually trigger the rolling update, but be sure you also have imagePullPolicy: "Always" set.

Another trick I found, where you don't have to change the image name, is to change the value of a field that will trigger a rolling update, such as terminationGracePeriodSeconds. You can do this using kubectl edit deployment your_deployment, or kubectl apply -f your_deployment.yaml, or using a patch like this:

kubectl patch deployment your_deployment -p \
  '{"spec":{"template":{"spec":{"terminationGracePeriodSeconds":31}}}}'

# Force an upgrade even though the docker-compose.yml for the services didn't change
$ rancher-compose up --force-upgrade

 

You can always write a custom pid1 that notices the configmap has changed and restarts your app.

You can also, e.g., mount the same config map in 2 containers, expose an http health check in the second container that fails if the hash of the config map contents changes, and shove that in as the liveness probe of the first container (because containers in a pod share the same network namespace). The kubelet will restart your first container for you when the probe fails.
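A hedged sketch of the hash-check idea, simplified to an exec-style probe script rather than the cross-container HTTP probe described above; the paths /etc/config/app.conf and /tmp/config.md5 are assumptions:

#!/bin/sh
# liveness probe script: fail once the mounted configmap content changes
[ -f /tmp/config.md5 ] || md5sum /etc/config/app.conf > /tmp/config.md5   # record the hash on the first probe run
md5sum -c /tmp/config.md5 > /dev/null 2>&1 || exit 1                      # non-zero exit fails the probe, so the kubelet restarts the container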

Of course if you don't care about which nodes the pods are on, you can simply delete them and the replication controller will "restart" them for you.

Using a deployment, I would scale it down and then up. You will still have that small amount of downtime though. You can do it in one line to reduce that...

kubectl scale deployment/update-demo --replicas=0; kubectl scale deployment/update-demo --replicas=4;

If you don't want to find all the pods, and don't care about downtime - just remove the RC and then re-create the RC.

 

 

The current best solution to this problem (referenced deep in https://github.com/kubernetes/kubernetes/issues/22368 linked in the sibling answer) is to use Deployments, and consider your ConfigMaps to be immutable.

When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.

Not quite as quick as just editing the ConfigMap in place, but much safer.
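A hedged sketch of that flow, assuming hypothetical names (configmaps my-config-v1/my-config-v2, deployment my-deployment, and a pod volume named config):

kubectl create configmap my-config-v2 --from-file=app.conf   # new versioned configmap; the old my-config-v1 is left untouched
kubectl patch deployment my-deployment -p '{"spec":{"template":{"spec":{"volumes":[{"name":"config","configMap":{"name":"my-config-v2"}}]}}}}'   # point the config volume at the new configmap, which triggers a rollout
kubectl rollout status deployment/my-deployment              # watch the rollout; roll back with "kubectl rollout undo deployment/my-deployment" if needed
kubectl delete configmap my-config-v1                        # only after the new config is proven good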

 

 

Oftentimes configmaps or secrets are injected as configuration files in containers. Depending on the application, a restart may be required should those be updated with a subsequent helm upgrade; but if the deployment spec itself didn't change, the application keeps running with the old configuration, resulting in an inconsistent deployment.

Yes. In order to do a rolling update, both the previous and new versions of the configmap must simultaneously exist.

I assume the "main configmap" would be embedded in Helm's representation of the chart. The live configmaps would be the configmap versions in use, and should be garbage collected with the replicasets generated by the Deployment.

Images can be built from the command line, e.g.:
docker build . -t *******:v1.0.0.3
Then tag the image for the Docker registry:
docker tag  *******:v1.0.0.3 [ip]:5000/[path]:v1.0.0.3
Once verified, push it to the registry:
docker push [ip]:5000/[path]:v1.0.0.3
To build the image from Visual Studio, set the Docker-compose project as the startup project and run it at least once. Whenever the image needs to be rebuilt, it is recommended to clean the Docker-Compose project first and then build.
The subsequent tag and push follow the same steps as above.

 

 

1. Install Docker on the development machine and add [ip]:5000 as an insecure registry.
2. Download kubectl.exe and add its location to the Path environment variable.
3. Put the relevant crt and key files into the .kube directory under your user profile, e.g. C:\Users\user01\.kube.
4. Open a command prompt and add a new cluster to the config file, e.g.:
kubectl config set-cluster bjoffice --server=https://[ip]:6443 --insecure-skip-tls-verify
5. Bind the user credentials:
kubectl.exe config set-credentials username --client-certificate=username.crt --client-key=username.key
6. Bind the user context:
kubectl config set-context username-context --cluster=[] --namespace=yourns --user=username
7. Switch to the new context:
kubectl config use-context username-context

Once this is set up, everyone can develop and manage microservices in container mode.
The attachment contains the Kubernetes control tool, the corresponding certificate files, and sample yaml files.

 


1. Get familiar with container-based microservice development and packaging (Dev)
2. Get familiar with container deployment
3. Define development standards
4. Define management standards, such as automatic release/rollback and monitoring
5. Deploy the cluster environment and operate it
6. Consider using different development stacks to improve efficiency
7. Review and improve

 

 

Dev:
Build and push the image
Create the yaml
Manage dev configmap/secrets
Build custom base images if necessary
May have view access to the online production environment (TBD)

Test:
Test based on the images/yaml
Manage the test configmap/secrets
Tag the release branch/image
Push the yaml/image to production repository (manual script/CI)

OPS:
Infrastructure management (Nodes)
Online resource monitoring (CPU/Memory)
Manage the production configmap/secrets
Nodes & deployment management if necessary
Replicator control if necessary

 

 

The yaml should be split into three parts:
1. Config
2. Deployment and service
3. Ingress
See the samples under Docs\Yaml\Deploy for details.
When an update is needed, only update the relevant part; avoid delete/create-style updates where possible and consider using edit mode instead.

 

In some cases you may need to skip the deployment and debug an image directly in interactive mode (e.g. a Job). You can use the kubectl run command directly; the argument after run is the deployment name:
                          kubectl -n dev run downloadimagetest --image=[ip]:5000/[path]:v1.0.0.6 -- /usr/bin/tail -f /dev/null
Then log in to the pod for debugging with kubectl exec -ti podname /bin/bash.
Note that this does not set the environment variables from the yaml; set them separately if needed.
If it is only a configuration change, there is no need to delete/create the whole yaml; just set the deployment's image parameter:
                           kubectl -n dev set image deployment/******-deployment dvm-adsproxyapi=[ip]:5000/[patch]:v1.0.0.412
Or edit manually with the edit command (Notepad or vi will pop up):
                           kubectl -n dev edit deployment/******-deployment
If you cannot Push/Pull images during development, consider restarting Docker.

 

 

If you want to build a Docker image with Visual Studio, set the target project to Docker-compose and set the configuration to Release.
In Debug mode, VS only generates a small number of dlls, for VS's own use.
In our current Dockerfile sample, files are copied from ./obj/Docker/publish by default (this directory contains all the dlls).
If you want to publish manually without building the solution-level Docker-compose, run the following command in the project directory (where the Dockerfile is located):
dotnet publish ******.WebApi.csproj -c Release -o ./obj/Docker/publish
Replace the project name and configuration as appropriate.
Then package it with docker build -t tagname .
You can also write a .cmd file that combines these two steps.
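For example, a build.cmd (the file name and tagname are placeholders) could simply chain the two commands:

dotnet publish ******.WebApi.csproj -c Release -o ./obj/Docker/publish && docker build -t tagname .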

 

 


During development you often need to debug after deployment; in that case consider running a local Docker container, e.g.:
docker run -e WWNamespace=[] -e ZKServerAddress=[ip1],[ip2],[ip3]  -p 5000:80 5aed1c78f55e
Here -e sets environment variables and -p sets the port mapping: the first port is the host port, the second is the container port (usually 80 for a WebApi), and the final argument is the image ID.
To run it in the background, add the -d flag and simply stop the container when you are done.
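For example, to run the same container detached (same placeholders as above):

docker run -d -e WWNamespace=[] -e ZKServerAddress=[ip1],[ip2],[ip3] -p 5000:80 5aed1c78f55e   # -d runs it in the background
docker ps                      # find the container ID
docker stop <container-id>     # stop it when finished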

 

 

 

"file integrity checksum failed" while pushing image to registry

1. docker system prune -a solved the problem; then restart Docker.

2. Or go to Docker => Settings => Advanced and increase the memory.

3. Had exactly the same problem; it worked after I deleted the images and rebuilt them.

Reposted from: https://www.cnblogs.com/panpanwelcome/p/8472651.html
