PostgreSQL Database: Pigsty pg_exporter
Prometheus Architecture and Ecosystem
Components
- Prometheus server: scrapes and stores time-series data
- Client libraries for instrumenting application code
- Push gateway: supports short-lived jobs
- Special-purpose exporters for services such as HAProxy, StatsD, Graphite, etc.
- An alertmanager to handle alerts
- Various supporting tools
Prometheus pulls metrics from instrumented targets, either directly or indirectly via the push gateway, and stores the scraped samples locally. It then aggregates the scraped samples according to configured rules to produce new time series or alerts, and the collected data can be visualized with Grafana.
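The pull model just described is driven by a scrape configuration. A minimal sketch is shown below; the job name is a placeholder assumption, and the target address assumes an exporter listening on localhost:8080:

```yaml
# prometheus.yml (sketch): Prometheus pulls /metrics from each listed target
global:
  scrape_interval: 15s            # how often to pull samples

scrape_configs:
  - job_name: "example-exporter"  # hypothetical job name
    static_configs:
      - targets: ["localhost:8080"]
```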
How to Develop an Exporter for Your Middleware
Prometheus provides client libraries that developers can use to build exporters for their own middleware and integrate them with Prometheus. Client libraries are currently available for Go, Java, Python, and Ruby.
Importing the Dependencies
Project structure
[root@node1 data]# tree exporter/
exporter/
├── collector
│ └── node.go
├── go.mod
└── main.go
1 directory, 3 files
Required dependencies
require (
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.1 // indirect
	github.com/prometheus/client_golang v1.1.0
	// gopsutil is used to collect host metrics
	github.com/shirou/gopsutil v0.0.0-20190731134726-d80c43f9c984
)
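For reference, the full go.mod for this layout might look like the following sketch; the module path `cloud.io/exporter` matches the import path used in main.go, and the `go` directive version is an assumption:

```
module cloud.io/exporter

go 1.13

require (
	github.com/prometheus/client_golang v1.1.0
	github.com/shirou/gopsutil v0.0.0-20190731134726-d80c43f9c984
)
```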
main.go
package main

import (
	"fmt"
	"net/http"

	"cloud.io/exporter/collector"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func init() {
	// Register our own collector
	prometheus.MustRegister(collector.NewNodeCollector())
}

func main() {
	http.Handle("/metrics", promhttp.Handler())
	if err := http.ListenAndServe(":8080", nil); err != nil {
		fmt.Printf("Error occurred when starting server: %v", err)
	}
}
To make the output easier to read, I commented out the default collectors in the client library's registry.go:
func init() {
	// MustRegister(NewProcessCollector(ProcessCollectorOpts{}))
	// MustRegister(NewGoCollector())
}
The code in /collector/node.go covers all four metric types, Counter, Gauge, Histogram, and Summary, plus a mixed style that combines them; details are explained in the comments in the code below.
package collector

import (
	"runtime"
	"sync"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/shirou/gopsutil/host"
	"github.com/shirou/gopsutil/mem"
)

var reqCount int32
var hostname string

type NodeCollector struct {
	requestDesc    *prometheus.Desc // Counter
	nodeMetrics    nodeStatsMetrics // mixed style
	goroutinesDesc *prometheus.Desc // Gauge
	threadsDesc    *prometheus.Desc // Gauge
	summaryDesc    *prometheus.Desc // Summary
	histogramDesc  *prometheus.Desc // Histogram
	mutex          sync.Mutex
}

// Data structure for the mixed-style metrics
type nodeStatsMetrics []struct {
	desc    *prometheus.Desc
	eval    func(*mem.VirtualMemoryStat) float64
	valType prometheus.ValueType
}

// Initialize the collector
func NewNodeCollector() prometheus.Collector {
	host, _ := host.Info()
	hostname = host.Hostname
	return &NodeCollector{
		requestDesc: prometheus.NewDesc(
			"total_request_count",
			"request count",
			[]string{"DYNAMIC_HOST_NAME"}, // dynamic label name
			prometheus.Labels{"STATIC_LABEL1": "static values can go here", "HOST_NAME": hostname}),
		nodeMetrics: nodeStatsMetrics{
			{
				desc:    prometheus.NewDesc("total_mem", "total memory", nil, nil),
				valType: prometheus.GaugeValue,
				eval:    func(ms *mem.VirtualMemoryStat) float64 { return float64(ms.Total) / 1e9 },
			},
			{
				desc:    prometheus.NewDesc("free_mem", "free memory", nil, nil),
				valType: prometheus.GaugeValue,
				eval:    func(ms *mem.VirtualMemoryStat) float64 { return float64(ms.Free) / 1e9 },
			},
		},
		goroutinesDesc: prometheus.NewDesc("goroutines_num", "number of goroutines", nil, nil),
		threadsDesc:    prometheus.NewDesc("threads_num", "number of threads", nil, nil),
		summaryDesc: prometheus.NewDesc(
			"summary_http_request_duration_seconds",
			"summary type",
			[]string{"code", "method"},
			prometheus.Labels{"owner": "example"},
		),
		histogramDesc: prometheus.NewDesc(
			"histogram_http_request_duration_seconds",
			"histogram type",
			[]string{"code", "method"},
			prometheus.Labels{"owner": "example"},
		),
	}
}
// Describe returns all descriptions of the collector.
// Implements the collector's Describe interface.
func (n *NodeCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- n.requestDesc
	for _, metric := range n.nodeMetrics {
		ch <- metric.desc
	}
	ch <- n.goroutinesDesc
	ch <- n.threadsDesc
	ch <- n.summaryDesc
	ch <- n.histogramDesc
}
// Collect returns the current state of all metrics of the collector.
// Implements the collector's Collect interface; this is where collection actually happens.
func (n *NodeCollector) Collect(ch chan<- prometheus.Metric) {
	n.mutex.Lock()
	ch <- prometheus.MustNewConstMetric(n.requestDesc, prometheus.CounterValue, 0, hostname)
	vm, _ := mem.VirtualMemory()
	for _, metric := range n.nodeMetrics {
		ch <- prometheus.MustNewConstMetric(metric.desc, metric.valType, metric.eval(vm))
	}
	ch <- prometheus.MustNewConstMetric(n.goroutinesDesc, prometheus.GaugeValue, float64(runtime.NumGoroutine()))
	num, _ := runtime.ThreadCreateProfile(nil)
	ch <- prometheus.MustNewConstMetric(n.threadsDesc, prometheus.GaugeValue, float64(num))
	// mock data
	ch <- prometheus.MustNewConstSummary(
		n.summaryDesc,
		4711, 403.34,
		map[float64]float64{0.5: 42.3, 0.9: 323.3},
		"200", "get",
	)
	// mock data
	ch <- prometheus.MustNewConstHistogram(
		n.histogramDesc,
		4711, 403.34,
		map[float64]uint64{25: 121, 50: 2403, 100: 3221, 200: 4233},
		"200", "get",
	)
	n.mutex.Unlock()
}
The results can be viewed at http://127.0.0.1:8080/metrics.
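For instance, the summary portion of the scrape output should look roughly like the sketch below; the quantile, sum, and count values come directly from the mock data passed to MustNewConstSummary in Collect, while host-dependent values are abridged:

```
# HELP summary_http_request_duration_seconds summary type
# TYPE summary_http_request_duration_seconds summary
summary_http_request_duration_seconds{code="200",method="get",owner="example",quantile="0.5"} 42.3
summary_http_request_duration_seconds{code="200",method="get",owner="example",quantile="0.9"} 323.3
summary_http_request_duration_seconds_sum{code="200",method="get",owner="example"} 403.34
summary_http_request_duration_seconds_count{code="200",method="get",owner="example"} 4711
```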
pg_exporter
[Prometheus](https://prometheus.io/) [exporter](https://prometheus.io/docs/instrumenting/exporters/) for [PostgreSQL](https://www.postgresql.org) metrics. **Gives you complete insight into your favourite elephant!**
PG Exporter is the foundation component of Project [Pigsty](https://pigsty.cc), which may well be the best **open-source** monitoring solution for PostgreSQL.
The latest binaries & RPMs can be found on the [release](https://github.com/Vonng/pg_exporter/releases) page. Supported versions: PostgreSQL 9.4+ & Pgbouncer 1.8+. The default collector definitions are compatible with PostgreSQL 10, 11, 12, 13, and 14.
The latest stable `pg_exporter` version is `0.3.2`, and the latest beta version is `0.4.0beta`.
How to Run
- Where to scrape: a postgres or pgbouncer URL, given via `PG_EXPORTER_URL` or `--url`
- What to scrape: a path to a config file or directory, by default `./pg_exporter.yaml` or `/etc/pg_exporter`
export PG_EXPORTER_URL='postgres://postgres:password@localhost:5432/postgres'
export PG_EXPORTER_CONFIG='/path/to/conf/file/or/dir'
pg_exporter
Run
Parameters can be given via command-line arguments or environment variables.
usage: pg_exporter [<flags>]

Flags:
  --help                      Show context-sensitive help (also try --help-long and --help-man).
  --url=URL                   postgres target url
  --config=CONFIG             path to config dir or file
  --label=""                  constant labels: comma separated list of label=value pairs
  --tag=""                    tags, comma separated list of server tags
  --disable-cache             force not using cache
  --disable-intro             disable collector level introspection metrics
  --auto-discovery            automatically scrape all databases for a given server
  --exclude-database="template0,template1,postgres"
                              excluded databases when enabling auto-discovery
  --include-database=""       included databases when enabling auto-discovery
  --namespace=""              prefix of built-in metrics, (pg|pgbouncer) by default
  --fail-fast                 fail fast instead of waiting during start-up
  --web.listen-address=":9630"
                              prometheus web server listen address
  --web.telemetry-path="/metrics"
                              URL path under which to expose metrics
  --dry-run                   dry run and print raw configs
  --explain                   explain server planned queries
  --version                   Show application version.
  --log.level="info"          Only log messages with the given severity or above. Valid levels: [debug, info, warn, error, fatal]
  --log.format="logger:stderr"
                              Set the log target and format. Example: "logger:syslog?appname=bob&local=7" or "logger:stdout?json=true"
API
Here are pg_exporter's REST APIs:
# Fetch metrics (metrics path depends on parameters)
curl localhost:9630/metrics
# Reload configuration
curl localhost:9630/reload
# Explain configuration
curl localhost:9630/explain
# Aliveness health check (200 up, 503 down)
curl localhost:9630/up
curl localhost:9630/health
curl localhost:9630/liveness
curl localhost:9630/readiness
# traffic route health check
### 200 if not in recovery, 404 if in recovery, 503 if server is down
curl localhost:9630/primary
curl localhost:9630/leader
curl localhost:9630/master
curl localhost:9630/read-write
curl localhost:9630/rw
### 200 if in recovery, 404 if not in recovery, 503 if server is down
curl localhost:9630/replica
curl localhost:9630/standby
curl localhost:9630/slave
curl localhost:9630/read-only
curl localhost:9630/ro
### 200 if server is ready for read traffic (including primary), 503 if server is down
curl localhost:9630/read
Main Function
func Run() {
	ParseArgs()

	// explain config only
	if *dryRun {
		DryRun()
	}

	if *configPath == "" {
		log.Errorf("no valid config path, exit")
		os.Exit(1)
	}

	// DummyServer will serve a constant pg_up
	// launch a dummy server to check listen address availability
	// and fake a pg_up 0 metric before PgExporter connects to the target instance;
	// otherwise, the exporter API is not available until the target instance is online
	dummySrv, closeChan := DummyServer()

	// create exporter: if the target is down, exporter creation will wait until it is back online
	var err error
	PgExporter, err = NewExporter(
		*pgURL,
		WithConfig(*configPath),
		WithConstLabels(*constLabels),
		WithCacheDisabled(*disableCache),
		WithFailFast(*failFast),
		WithNamespace(*exporterNamespace),
		WithAutoDiscovery(*autoDiscovery),
		WithExcludeDatabase(*excludeDatabase),
		WithIncludeDatabase(*includeDatabase),
		WithTags(*serverTags),
	)
	if err != nil {
		log.Fatalf("fail creating pg_exporter: %s", err.Error())
		os.Exit(2)
	}

	// trigger a manual planning before explain
	if *explainOnly {
		PgExporter.server.Plan()
		fmt.Println(PgExporter.Explain())
		os.Exit(0)
	}

	prometheus.MustRegister(PgExporter)
	defer PgExporter.Close()

	// reload conf when receiving SIGHUP or SIGUSR1
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGHUP)
	go func() {
		for sig := range sigs {
			switch sig {
			case syscall.SIGHUP:
				log.Infof("%v received, reloading", sig)
				_ = Reload()
			}
		}
	}()

	/*************** REST API ***************/
	// basic
	http.HandleFunc("/", TitleFunc)
	http.HandleFunc("/version", VersionFunc)
	// reload
	http.HandleFunc("/reload", ReloadFunc)
	// explain
	http.HandleFunc("/explain", PgExporter.ExplainFunc)
	// alive
	http.HandleFunc("/up", PgExporter.UpCheckFunc)
	http.HandleFunc("/read", PgExporter.UpCheckFunc)
	http.HandleFunc("/health", PgExporter.UpCheckFunc)
	http.HandleFunc("/liveness", PgExporter.UpCheckFunc)
	http.HandleFunc("/readiness", PgExporter.UpCheckFunc)
	// primary
	http.HandleFunc("/primary", PgExporter.PrimaryCheckFunc)
	http.HandleFunc("/leader", PgExporter.PrimaryCheckFunc)
	http.HandleFunc("/master", PgExporter.PrimaryCheckFunc)
	http.HandleFunc("/read-write", PgExporter.PrimaryCheckFunc)
	http.HandleFunc("/rw", PgExporter.PrimaryCheckFunc)
	// replica
	http.HandleFunc("/replica", PgExporter.ReplicaCheckFunc)
	http.HandleFunc("/standby", PgExporter.ReplicaCheckFunc)
	http.HandleFunc("/slave", PgExporter.ReplicaCheckFunc)
	http.HandleFunc("/read-only", PgExporter.ReplicaCheckFunc)
	http.HandleFunc("/ro", PgExporter.ReplicaCheckFunc)
	// metric
	_ = dummySrv.Close()
	<-closeChan
	http.Handle(*metricPath, promhttp.Handler())

	log.Infof("pg_exporter for %s start, listen on http://%s%s", shadowDSN(*pgURL), *listenAddress, *metricPath)
	log.Fatal(http.ListenAndServe(*listenAddress, nil))
}