Installing docker, k8s, and prometheus-operator in an offline environment: building a local yum repository

Server information:
1.1.1.1  has public internet access
1.1.1.5  internal server, k8s master
1.1.1.6  internal server, k8s node
1.1.1.7  internal server, k8s node
1.1.1.8  internal server, k8s node

docker version: 20.10.17
k8s version: v1.23.5
prometheus-operator version: v0.53.1

Part 1. Building the local yum repository

yum_server: 1.1.1.1
yum_client: 1.1.1.5, 1.1.1.6, 1.1.1.7, 1.1.1.8

1. Set up nginx

cat nginx.conf
user  root;
worker_processes  8;
events {
    worker_connections  4096;
}
http {
    server {
        listen 33333;
        proxy_connect_timeout 60;
        location / {
            add_header Cache-Control no-store;
            root /data/yum;
            autoindex on;
        }
    }
}
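With the config in place, nginx can be started and the directory listing checked from another host. A minimal sketch, assuming nginx is installed from the OS packages and this file is used as its main config (paths may differ in your setup):

# Validate and load the config, then confirm the repo directory is being served
nginx -t -c /etc/nginx/nginx.conf
systemctl restart nginx
curl -s http://1.1.1.1:33333/ | head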


2. Download the packages

2.1 Edit the yum repo config files
cat /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

2.2 Create the local repository directory
mkdir -p /data/yum/data/BaseOS/Packages
2.3 Download the packages
yum update
yum clean all
yum makecache

# Note: this command does not download every package and dependency in full; it only downloads packages that are not yet installed on this system
yum install --downloadonly --downloaddir=/data/yum/data/BaseOS/Packages/  docker-ce kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5

# To force-download a package that is already installed, use the following command instead
yum reinstall --downloadonly --downloaddir=/data/yum/data/BaseOS/Packages/ libtevent
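If the download host already has many of these packages installed, the commands above will skip them. One way to pull the full dependency closure regardless is repotrack from yum-utils; a hedged sketch, assuming yum-utils is available in the configured repos:

# Optional: download the complete dependency tree, including packages
# already installed on this host (requires the yum-utils package)
yum install -y yum-utils
repotrack -p /data/yum/data/BaseOS/Packages/ docker-ce kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5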
2.4 Full list of packages

Packages can be downloaded on demand with the method in 2.3.
The packages used across my whole installation were roughly the following:

ll | awk -F' ' '{print $9}'
07433570e95a2782cc127e659fe6df434db7f88805e2aed6067768d2f32cb809-cri-tools-1.24.2-0.x86_64.rpm
96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm
ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm
audit-libs-python-2.8.5-4.el7.x86_64.rpm
checkpolicy-2.5-8.el7.x86_64.rpm
conntrack-tools-1.4.4-7.el7.x86_64.rpm
containerd.io-1.6.7-3.1.el7.x86_64.rpm
container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm
db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
docker-ce-20.10.17-3.el7.x86_64.rpm
docker-ce-cli-20.10.17-3.el7.x86_64.rpm
docker-ce-rootless-extras-20.10.17-3.el7.x86_64.rpm
docker-scan-plugin-0.17.0-3.el7.x86_64.rpm
fuse3-libs-3.6.1-4.el7.x86_64.rpm
fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
gssproxy-0.7.0-30.el7_9.x86_64.rpm
keyutils-1.5.8-3.el7.x86_64.rpm
libbasicobjects-0.1.1-32.el7.x86_64.rpm
libcgroup-0.41-21.el7.x86_64.rpm
libcollection-0.7.0-32.el7.x86_64.rpm
libevent-2.0.21-4.el7.x86_64.rpm
libini_config-1.3.1-32.el7.x86_64.rpm
libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
libnfsidmap-0.25-19.el7.x86_64.rpm
libpath_utils-0.2.1-32.el7.x86_64.rpm
libref_array-0.1.5-32.el7.x86_64.rpm
libsemanage-python-2.5-14.el7.x86_64.rpm
libtalloc-2.1.16-1.el7.x86_64.rpm
libtevent-0.9.39-1.el7.x86_64.rpm
libtirpc-0.2.4-0.16.el7.x86_64.rpm
libverto-tevent-0.2.5-4.el7.x86_64.rpm
mailx-12.5-19.el7.x86_64.rpm
net-tools-2.0-0.25.20131004git.el7.x86_64.rpm
nfs-utils-1.3.0-0.68.el7.2.x86_64.rpm
policycoreutils-python-2.5-34.el7.x86_64.rpm
python-IPy-0.75-6.el7.noarch.rpm
quota-4.01-19.el7.x86_64.rpm
quota-nls-4.01-19.el7.noarch.rpm
repodata
rpcbind-0.2.0-49.el7.x86_64.rpm
setools-libs-3.3.8-4.el7.x86_64.rpm
slirp4netns-0.4.3-4.el7_8.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
tcp_wrappers-7.6-77.el7.x86_64.rpm

3. Generate the local repository metadata

cd /data/yum/data/
createrepo .
# If new packages are later added to /data/yum/data/BaseOS/Packages/, refresh the metadata with:
cd /data/yum/data/
createrepo --update .
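At this point the repo metadata should be reachable through nginx. A quick sanity check from any host that can reach 1.1.1.1 (a minimal sketch; the path follows the directory layout above):

# repomd.xml is served from the repodata directory created by createrepo
curl -sI http://1.1.1.1:33333/data/repodata/repomd.xml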

4. Configure the yum clients

The clients are configured in bulk with jenkins + ansible.

4.1 tree /home/jenkins/ansible_workspace
/home/jenkins/ansible_workspace
├── deploy_process.yml
├── environments
│   └── colony
│       ├── inventory
│       └── vars.yml
├── roles
│   └── yum_client_install
│       ├── tasks
│       │   └── main.yml
│       └── templates
│           └── local.repo.j2
└── utils_install.yml
4.2 cat /home/jenkins/ansible_workspace/utils_install.yml
- hosts: "{{ hosts }}"
  gather_facts: "{{ gather_facts }}"
  serial: "{{ serial }}"
  user: "{{ user_name }}"
  vars:
    serial: 100
    gather_facts: "yes"
  vars_files:
    - "{{ ansibleHome }}/environments/{{ env }}/vars.yml"
  roles:
    - "{{ util }}"
4.3 cat /home/jenkins/ansible_workspace/environments/colony/inventory
[k8s_master]
1.1.1.5

[k8s_node]
1.1.1.6
1.1.1.7
1.1.1.8
4.4 cat /home/jenkins/ansible_workspace/roles/yum_client_install/tasks/main.yml
---
- name: judge local.repo exist
  shell: ls /etc/yum.repos.d/local.repo > /dev/null
  ignore_errors: True
  register: local_repo

- name: clear other repo
  shell: cd /etc/yum.repos.d/ && ls | grep -v local.repo | xargs -r rm -rf
  ignore_errors: True

- name: create /etc/yum.repos.d
  file: name=/etc/yum.repos.d state=directory owner=root group=root mode=0755

- name: scp local.repo
  template:
    src: local.repo.j2
    dest: /etc/yum.repos.d/local.repo
  when: local_repo is failed

- name: update yum
  shell: "yum clean all; yum makecache"
  when: local_repo is failed

4.5 cat /home/jenkins/ansible_workspace/roles/yum_client_install/templates/local.repo.j2
[base]
name=k8s
baseurl=http://1.1.1.1:33333/data
gpgcheck=0
enabled=1
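For reference, the same configuration can be applied by hand on a single client without jenkins/ansible; a minimal sketch assuming the repo layout built above:

# On a client (e.g. 1.1.1.5): point yum at the local repo and install from it
cat > /etc/yum.repos.d/local.repo <<'EOF'
[base]
name=k8s
baseurl=http://1.1.1.1:33333/data
gpgcheck=0
enabled=1
EOF
yum clean all && yum makecache
yum install -y docker-ce kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5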
4.6 Jenkins job parameters
(screenshot omitted: string parameters hosts, user_name, and environment, referenced in the shell step below)
4.7 Jenkins job shell build step
ansibleHome='/home/jenkins/ansible_workspace'
cd ${ansibleHome}
ansible-playbook utils_install.yml  -i environments/${environment}/inventory -e "hosts=${hosts} user_name=${user_name} env=${environment} ansibleHome=${ansibleHome} util=yum_client_install"
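After the job has run, a quick check on any client confirms that only the local repo is active and the target packages resolve from it (a hedged sketch):

# On a client node: confirm the local repo is the only one enabled
yum repolist enabled
# Confirm the docker/k8s packages resolve from it
yum info docker-ce kubelet kubeadm kubectl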
