Kubernetes is a distributed cluster system from Google built on top of Docker. It is made up of the following components:
etcd: a highly available store for shared configuration and service discovery. Used together with flannel on the minions, it ensures that the Docker daemon on each minion gets its own IP segment, so that every running Docker container ends up with an IP address different from that of any container on any other minion.
flannel: provides the overlay network.
kube-apiserver: whether the cluster is driven through kubectl or directly through the remote API, every request passes through the apiserver.
kube-controller-manager: runs the control loops for the replication controller, endpoints controller, namespace controller, and serviceaccounts controller, and works with kube-apiserver to keep these controllers functioning.
kube-scheduler: the Kubernetes scheduler places pods onto specific worker nodes (minions) according to its scheduling algorithm; this step is also called binding (bind).
kubelet: the kubelet runs on every Kubernetes minion node; it is the logical successor of the container agent.
kube-proxy: kube-proxy runs on the minion nodes and acts as a service proxy.
(Figure: GIT + Jenkins + Kubernetes + Docker + Etcd + confd + Nginx + Glusterfs architecture)
1. Environment overview and preparation
1.1 Host operating system
The physical machines run 64-bit CentOS 7.4; details below.
[root@etcd ~]# uname -a
Linux etcd 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@etcd ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
1.2 Host information
Five machines are used for the k8s environment (Master, etcd, and the registry could also share one machine); details below:
| Role | Hostname | IP |
| --- | --- | --- |
| Etcd | etcd | 10.0.0.10 |
| Kubernetes Master | k8s-master | 10.0.0.11 |
| Kubernetes Node 1 | k8s-node-1 | 10.0.0.12 |
| Kubernetes Node 2 | k8s-node-2 | 10.0.0.13 |
| Kubernetes Node 3 | k8s-node-3 | 10.0.0.14 |
Set the hostname on each of the five machines.
On etcd:
[root@localhost ~]# hostnamectl --static set-hostname etcd
On the master:
[root@localhost ~]# hostnamectl --static set-hostname k8s-master
On node1:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-1
On node2:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-2
On node3:
[root@localhost ~]# hostnamectl --static set-hostname k8s-node-3
Add the following hosts entries on all five machines:
# add name resolution entries to /etc/hosts
echo '10.0.0.10 etcd
10.0.0.11 k8s-master
10.0.0.12 k8s-node-1
10.0.0.13 k8s-node-2
10.0.0.14 k8s-node-3
10.0.0.10 kube-registry' >> /etc/hosts
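To confirm that each name resolves to the intended address, a quick check such as the following can be run (getent ships with a stock CentOS 7 install):
# every hostname should print the address configured above
for h in etcd k8s-master k8s-node-1 k8s-node-2 k8s-node-3 kube-registry; do
    getent hosts "$h"
done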
1.3 Disable the firewall on all five machines
systemctl disable firewalld.service
systemctl stop firewalld.service
Install and start NTP:
# yum -y install ntp
# systemctl start ntpd
# systemctl enable ntpd
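To verify that the clocks are actually being synchronized before continuing (standard ntp tooling on CentOS 7), something like this can be used:
# a peer marked with * means a time source has been selected
ntpq -p
# overall clock and NTP status
timedatectl status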
2. Deploy etcd
k8s depends on etcd at runtime, so etcd has to be deployed first. Here it is installed with yum:
[root@etcd ~]# yum install etcd -y
The etcd installed by yum keeps its default configuration file at /etc/etcd/etcd.conf. Edit the file and change the lines marked below:
# [member]
ETCD_NAME=master    #<---- change this line
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"    #<---- change this line; 2379 is the default client port, and 4001 is added as a spare in case of port conflicts
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"    #<---- change this line
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
Start etcd and verify its status:
[root@etcd ~]# systemctl enable etcd
[root@etcd ~]# systemctl start etcd
[root@etcd ~]# etcdctl set testdir/testkey0 0
0
[root@etcd ~]# etcdctl get testdir/testkey0
0
[root@etcd ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@etcd ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
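As an additional sanity check that both advertised client ports are serving (the /version endpoint is part of etcd's HTTP API):
# etcd should be listening on both 2379 and 4001
ss -tnlp | grep etcd
curl -s http://etcd:2379/version
curl -s http://etcd:4001/version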
3. Deploy the master
3.1 Install Docker
[root@k8s-master ~]# yum install -y docker
Edit the Docker configuration so that images can be pulled from the registry.
[root@k8s-master ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
OPTIONS='--insecure-registry registry:5000'
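Note that the second OPTIONS line overrides the first one, so the original daemon flags are lost. If you want to keep them, a single merged line (my suggestion, not part of the original configuration) would be:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'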
Enable Docker at boot and start the service:
[root@k8s-master ~]# chkconfig docker on
Note: Forwarding request to 'systemctl enable docker.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-master ~]# service docker start
Redirecting to /bin/systemctl start docker.service
3.2 Install Kubernetes
[root@k8s-master ~]# yum install -y kubernetes
3.3 Configure and start Kubernetes
The following components must run on the Kubernetes master:
Kubernetes API Server
Kubernetes Controller Manager
Kubernetes Scheduler
Accordingly, change the marked lines in the following configuration files:
3.3.1 /etc/kubernetes/apiserver
[root@k8s-master ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    #<---- change this line

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"    #<---- change this line

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"    #<---- change this line

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"    #<---- change this line

# Add your own!
KUBE_API_ARGS=""
3.3.2 /etc/kubernetes/config
[root@k8s-master ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"    #<---- change this line
Start the services and enable them at boot:
[root@k8s-master ~]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master ~]# systemctl start kube-scheduler.service
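A quick check that the master components are up (the /healthz endpoint and kubectl componentstatuses are both available in this release):
# the API server should answer with "ok"
curl -s http://k8s-master:8080/healthz
# controller-manager, scheduler and etcd should all report Healthy
kubectl -s http://k8s-master:8080 get componentstatuses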
4. Deploy the nodes
4.1 Install Docker
See section 3.1.
[root@k8s-node-1 ~]# yum install -y docker
Edit the Docker configuration so that images can be pulled from the registry.
[root@k8s-node-1 ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
OPTIONS='--insecure-registry kube-registry:5000'    # add this line; kube-registry:5000 must resolve via /etc/hosts
[root@k8s-node-1 ~]# chkconfig docker on
[root@k8s-node-1 ~]# service docker start
4.2 Install Kubernetes
See section 3.2.
[root@k8s-node-1 ~]# yum install -y kubernetes
4.3 Configure and start Kubernetes
The following components must run on each Kubernetes node:
Kubelet
Kubernetes Proxy
Accordingly, change the marked lines in the following configuration files:
4.3.1 /etc/kubernetes/config
[root@k8s-node-1 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"    #<---- change this line
4.3.2 /etc/kubernetes/kubelet
[root@k8s-node-1 ~]# cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"    #<---- change this line

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-1"    #<---- change this line

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"    #<---- change this line

# pod infrastructure container
# KUBELET_POD_INFRA_CONTAINER: the infrastructure container started with every Pod; a self-built pod-infrastructure image is used here
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kube-registry:5000"    #<---- change this line; the default infrastructure image above cannot be downloaded

# Add your own!
KUBELET_ARGS=""
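Note that --pod-infra-container-image normally takes a full image reference (registry/image:tag); with only the registry host as above, Docker will treat kube-registry:5000 as an image named kube-registry with tag 5000. Assuming the pause image has been pushed to the private registry under the (hypothetical) name pod-infrastructure, a more typical value would be:
# hypothetical full reference to the infra image in the private registry
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kube-registry:5000/pod-infrastructure:latest"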
Start the services and enable them at boot:
[root@k8s-node-1 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node-1 ~]# systemctl start kubelet.service
[root@k8s-node-1 ~]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node-1 ~]# systemctl start kube-proxy.service
4.4 Check the status and query the health of the cluster
On the master, list the nodes in the cluster and their status:
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     5m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     5m
For the other nodes, k8s-node-2 and k8s-node-3, only the following needs to change:
[root@k8s-node-2 ~]# vi /etc/kubernetes/kubelet
......
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node-2"
......
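If you prefer to script the change instead of editing by hand, a sketch (run on the node whose name is being set, here k8s-node-2):
# point the hostname override at this node and restart the node services
sed -i 's|^KUBELET_HOSTNAME=.*|KUBELET_HOSTNAME="--hostname-override=k8s-node-2"|' /etc/kubernetes/kubelet
systemctl restart kubelet.service kube-proxy.service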
After the node components are deployed on k8s-node-1, k8s-node-2, and k8s-node-3, check the status:
#<--- list the Kubernetes nodes and their status
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     40m
k8s-node-2   Ready     5m
k8s-node-3   Ready     23s
#<--- the same check again some time later
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME         STATUS    AGE
k8s-node-1   Ready     3d
k8s-node-2   Ready     3d
k8s-node-3   Ready     3d
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     40m
k8s-node-2   Ready     5m
k8s-node-3   Ready     25s
At this point a Kubernetes cluster has been set up, but it cannot yet work properly; continue with the steps below.
5. Create the overlay network with Flannel
5.1 Install Flannel
Run the following on the master and on every node to install it:
[root@k8s-master ~]# yum install -y flannel
5.2 Configure Flannel
On the master and every node, edit /etc/sysconfig/flanneld and change the marked line:
[root@k8s-master ~]# cat /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"    #<---- change this line

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
5.3 Configure the flannel key in etcd
The Kubernetes network model requires every Pod to have an IP in a flat, shared network namespace, called the PodIP, through which a Pod can communicate directly across the network with other physical machines and Pods. To implement this model, an overlay network has to be created in the cluster to connect the nodes; this can currently be done with third-party network plugins such as Flannel or Open vSwitch.
Flannel assigns a different IP segment to the Docker bridge on each Node so that container IPs are unique across the cluster. Because Flannel reconfigures the Docker bridge, the previously created bridge has to be removed first (run on every machine):
# iptables -t nat -F
# ifconfig docker0 down
# brctl delbr docker0
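If brctl or ifconfig is missing (on CentOS 7 they live in the bridge-utils and net-tools packages), install them first, or use the iproute2 equivalents shown near the end of this article:
yum install -y bridge-utils net-tools
# iproute2 alternative to the two bridge commands above
ip link set docker0 down
ip link delete docker0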
Flannel stores its configuration in etcd to keep multiple Flannel instances consistent, so the following key has to be created in etcd. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they do not match, flanneld fails to start.)
[root@etcd ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/24" }'
{ "Network": "10.0.0.0/24" }
[root@etcd ~]# etcdctl get /atomic.io/network/config
{ "Network": "10.0.0.0/24" }
[root@k8s-node-1 ~]# systemctl status flanneld.service    # the flannel network must not use the same segment as the physical machines
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-05-31 17:01:42 CST; 4s ago
  Process: 74016 ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
 Main PID: 73955 (flanneld)
   Memory: 16.7M
   CGroup: /system.slice/flanneld.service
           └─73955 /usr/bin/flanneld -etcd-endpoints=http://etcd:2379 -etcd-prefix=/atomic.io/network --iface=ens33

May 31 17:01:36 k8s-node-1 flanneld-start[73955]: E0531 17:01:36.996778 73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:38 k8s-node-1 flanneld-start[73955]: E0531 17:01:38.000029 73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:39 k8s-node-1 flanneld-start[73955]: E0531 17:01:39.003675 73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:40 k8s-node-1 flanneld-start[73955]: E0531 17:01:40.006161 73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:41 k8s-node-1 flanneld-start[73955]: E0531 17:01:41.007743 73955 network.go:102] failed to retrieve network config: 100: Key not... [11633]
May 31 17:01:42 k8s-node-1 flanneld-start[73955]: I0531 17:01:42.016595 73955 local_manager.go:179] Picking subnet in range 10.0.1.0 ... 10.0.255.0
May 31 17:01:42 k8s-node-1 flanneld-start[73955]: I0531 17:01:42.055551 73955 manager.go:250] Lease acquired: 10.0.74.0/24
May 31 17:01:42 k8s-node-1 flanneld-start[73955]: I0531 17:01:42.056475 73955 network.go:98] Watching for new subnet leases
May 31 17:01:42 k8s-node-1 flanneld-start[73955]: I0531 17:01:42.081693 73955 network.go:191] Subnet added: 10.0.0.128/25
May 31 17:01:42 k8s-node-1 systemd[1]: Started Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
Since 10.0.0.0/24 is the same segment the physical machines use, delete the key:
[root@etcd ~]# etcdctl rm /atomic.io/network/config
PrevNode.Value: { "Network": "10.0.0.0/24" }
[root@etcd ~]# etcdctl get /atomic.io/network/config
Error: 100: Key not found (/atomic.io/network/config) [11633]
Then set a different network segment instead.
For example:
etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16", "SubnetMin": "172.17.1.0", "SubnetMax": "172.17.254.0"}'
[root@etcd ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }
[root@etcd ~]# etcdctl get /atomic.io/network/config
{ "Network": "10.0.0.0/16" }
5.4 Start the services
After starting Flannel, Docker and the Kubernetes services have to be restarted in turn.
On the master:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
On each node:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
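To confirm that every node obtained its own subnet lease and that Docker picked it up, checks along these lines can be used (subnet.env is written by flanneld; the etcd prefix is the one configured above):
# on any node: the subnet assigned to this host
cat /run/flannel/subnet.env
ip a s flannel0
ip a s docker0
# on the etcd host: one lease entry per node
etcdctl ls /atomic.io/network/subnets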
kube-apiserver: runs on the master node and accepts user requests.
kube-scheduler: runs on the master node and handles resource scheduling, i.e. decides which node a pod is created on.
kube-controller-manager: runs on the master node and contains the ReplicationManager, Endpoints controller, Namespace controller, Node controller, and others.
etcd: a distributed key-value store that shares the resource-object information of the whole cluster.
kubelet: runs on the node and maintains the pods running on that particular host.
kube-proxy: runs on the node and acts as a service proxy.
Workflow:
① kubectl sends the deployment request to the API Server.
② The API Server notifies the Controller Manager to create a deployment resource.
③ The Scheduler performs scheduling and assigns the two replica Pods to k8s-node1 and k8s-node2.
④ The kubelet on k8s-node1 and k8s-node2 creates and runs the Pods on its own node.
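This workflow can be exercised end to end with a small test deployment (a sketch using the public nginx image; any two-replica workload would do):
# create a two-replica deployment and see which nodes the pods land on
kubectl -s http://k8s-master:8080 run nginx --image=nginx --replicas=2
kubectl -s http://k8s-master:8080 get pods -o wide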
[root@k8s-master ~]# kubectl get cs    # check the health of each component
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
---------------------------------------
[root@k8s-master ~]# kube-apiserver --version
Kubernetes v1.5.2
[root@k8s-master ~]# flanneld -version
0.7.1
[root@etcd ~]# etcd -version
etcd Version: 3.2.18
Git SHA: eddf599
Go Version: go1.9.4
Go OS/Arch: linux/amd64
--------------------------------------------
Further notes:
Flannel assigns a different IP segment to the Docker bridge on each Node so that container IPs are unique within the cluster.
Flannel therefore reconfigures the Docker bridge, and the originally created Docker bridge has to be deleted.
[root@k8s-master ~]# ip link set docker0 down      # bring the docker0 bridge down
[root@k8s-master ~]# ip link delete docker0        # delete the docker0 bridge
[root@k8s-master ~]# ip a s
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f0:be:d7 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f:685e:be9c:a067/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::25bc:f2cb:4ac9:4aff/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5095:3b33:58b4:46a4/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
3: flannel0: mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 10.0.20.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::2ab2:fede:73d1:9c7/64 scope link flags 800
       valid_lft forever preferred_lft forever
[root@k8s-master ~]# systemctl restart flanneld docker
When this is done, the result can be verified.
Every Node now has a docker0 and a flannel0 interface, and their addresses differ from Node to Node.
[root@k8s-master ~]# ip a s
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:f0:be:d7 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c0f:685e:be9c:a067/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::25bc:f2cb:4ac9:4aff/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5095:3b33:58b4:46a4/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
4: flannel0: mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 10.0.20.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::48ef:305f:a9af:6b4c/64 scope link flags 800
       valid_lft forever preferred_lft forever
5: docker0: mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:62:01:91:e2 brd ff:ff:ff:ff:ff:ff
    inet 10.0.20.1/24 scope global docker0
       valid_lft forever preferred_lft forever
[root@k8s-master ~]# kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1
[root@k8s-master ~]# kubectl get pod -o wide
No resources found.