You may already have a Fluentd daemon set running in your cluster, such as those described here and here, or something else specific to your cluster. This is likely configured to send logs to an Elasticsearch system or a logging provider.
You may use these Fluentd daemons, or any other Fluentd daemons you have set up, as long as they are listening for forwarded logs and Mixer is able to connect to them. In order for Mixer to connect to a running Fluentd daemon, you may need to add a Service for Fluentd. The Fluentd configuration for listening for forwarded logs is:
<source>
  type forward
</source>
The full details of connecting Mixer to all possible Fluentd configurations are beyond the scope of this task.
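For illustration only: the `forward` input above accepts msgpack-encoded events over TCP, where an event in Message Mode is a three-element array `[tag, time, record]`. Below is a minimal, hand-rolled sketch of building and shipping such an event. The host `fluentd-es.logging` and port `24224` match the example Service defined later in this task, but whether they are reachable depends on your deployment; the helper names are made up for this sketch.

```python
import socket
import struct

def mp_str(s: str) -> bytes:
    """msgpack fixstr encoding (strings shorter than 32 bytes)."""
    b = s.encode("utf-8")
    assert len(b) < 32
    return bytes([0xA0 | len(b)]) + b

def mp_map(d: dict) -> bytes:
    """msgpack fixmap of string keys/values (fewer than 16 entries)."""
    assert len(d) < 16
    out = bytearray([0x80 | len(d)])
    for k, v in d.items():
        out += mp_str(k) + mp_str(v)
    return bytes(out)

def forward_event(tag: str, ts: int, record: dict) -> bytes:
    """Fluentd forward protocol, Message Mode: [tag, time, record]."""
    return (bytes([0x93])                      # fixarray of 3 elements
            + mp_str(tag)
            + b"\xce" + struct.pack(">I", ts)  # uint32 event timestamp
            + mp_map(record))

def send_event(host: str, port: int, payload: bytes) -> None:
    """Ship one event to a listening Fluentd forward input."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

# Example (requires a reachable Fluentd forward input):
# send_event("fluentd-es.logging", 24224,
#            forward_event("istio.newlog", 0, {"msg": "hello"}))
```

Mixer's fluentd adapter speaks this same protocol, which is why any daemon with a `forward` input will do.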
Example Fluentd, Elasticsearch, Kibana Stack
For the purposes of this task, you may deploy the example stack provided. This stack includes Fluentd, Elasticsearch, and Kibana in a set of non-production-ready Services and Deployments, all in a new Namespace called logging.
Save the following as logging-stack.yaml.
# Logging Namespace. All below are a part of this namespace.
apiVersion: v1
kind: Namespace
metadata:
name: logging
---
# Elasticsearch Service
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    app: elasticsearch
---
# Elasticsearch Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1
        name: elasticsearch
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch
          mountPath: /data
      volumes:
      - name: elasticsearch
        emptyDir: {}
---
# Fluentd Service
apiVersion: v1
kind: Service
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
spec:
  ports:
  - name: fluentd-tcp
    port: 24224
    protocol: TCP
    targetPort: 24224
  - name: fluentd-udp
    port: 24224
    protocol: UDP
    targetPort: 24224
  selector:
    app: fluentd-es
---
# Fluentd Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    app: fluentd-es
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: fluentd-es
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config-volume
        configMap:
          name: fluentd-es-config
---
# Fluentd ConfigMap, contains config files.
kind: ConfigMap
apiVersion: v1
data:
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      type forward
    </source>
  output.conf: |-
    <match **>
      type elasticsearch
      log_level info
      include_tag_key true
      host elasticsearch
      port 9200
      logstash_format true
      # Set the chunk limits.
      buffer_chunk_limit 2M
      buffer_queue_limit 8
      flush_interval 5s
      # Never wait longer than 5 minutes between retries.
      max_retry_wait 30
      # Disable the limit on the number of retries (retry forever).
      disable_retry_limit
      # Use multiple threads for processing.
      num_threads 2
    </match>
metadata:
  name: fluentd-es-config
  namespace: logging
---
# Kibana Service
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    app: kibana
---
# Kibana Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
  annotations:
    sidecar.istio.io/inject: "false"
spec:
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.1.1
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
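A side note on the Fluentd ConfigMap above: `buffer_chunk_limit 2M` and `buffer_queue_limit 8` together bound how much log data the Elasticsearch output plugin will hold in its buffer. A quick back-of-the-envelope check (a sketch, assuming the in-memory buffer default of this plugin version):

```python
# Maximum buffered output data implied by the Fluentd settings above:
# buffer_chunk_limit 2M, buffer_queue_limit 8.
chunk_limit_bytes = 2 * 1024 * 1024   # buffer_chunk_limit 2M
queue_limit_chunks = 8                # buffer_queue_limit 8

max_buffer_bytes = chunk_limit_bytes * queue_limit_chunks
print(max_buffer_bytes // (1024 * 1024), "MiB")   # prints: 16 MiB
```

This 16 MiB ceiling fits comfortably under the 500Mi memory limit set on the fluentd-es Deployment, which is one reason these settings are reasonable for a demo stack.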
Create the resources:
kubectl apply -f logging-stack.yaml
You should see the following:
namespace "logging" created
service "elasticsearch" created
deployment "elasticsearch" created
service "fluentd-es" created
deployment "fluentd-es" created
configmap "fluentd-es-config" created
service "kibana" created
deployment "kibana" created
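If you drive this setup from a script, the `kubectl` output above is easy to sanity-check programmatically. A small sketch (the expected resource list mirrors the output shown for this stack; the function names are made up for illustration):

```python
import re

# Every (kind, name) pair the logging stack should create,
# taken from the kubectl output shown above.
EXPECTED = {
    ("namespace", "logging"),
    ("service", "elasticsearch"), ("deployment", "elasticsearch"),
    ("service", "fluentd-es"), ("deployment", "fluentd-es"),
    ("configmap", "fluentd-es-config"),
    ("service", "kibana"), ("deployment", "kibana"),
}

def created_resources(output: str) -> set:
    """Extract (kind, name) pairs from lines like: service "kibana" created."""
    return set(re.findall(r'(\w+)\s*"([^"]+)"\s+created', output))

def stack_is_complete(output: str) -> bool:
    """True if every expected resource appears in the apply output."""
    return EXPECTED <= created_resources(output)
```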
Applying the Istio configuration that routes logs to this stack (a logentry, a fluentd handler, and a rule connecting them) produces output such as:
Created config logentry/istio-system/newlog at revision 22374
Created config fluentd/istio-system/handler at revision 22375
Created config rule/istio-system/newlogtofluentd at revision 22376